Aug 13 19:43:52 crc systemd[1]: Starting Kubernetes Kubelet...
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.177165 4183 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182423 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182470 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182483 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182492 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182501 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182509 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182517 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182526 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182534 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182542 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182551 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182559 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182567 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182576 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182584 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182592 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182600 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182608 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182617 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182624 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182633 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182641 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182650 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182658 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182666 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182733 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182748 4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182757 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182765 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182858 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182874 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182883 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182891 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182900 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182908 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182918 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182926 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182934 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182943 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182951 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182959 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182967 4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182975 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182984 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182992 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183018 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183026 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183034 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183042 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183051 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183060 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183069 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183078 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183088 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183097 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183107 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183116 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183125 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183134 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183145 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183412 4183 flags.go:64] FLAG: --address="0.0.0.0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183522 4183 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183535 4183 flags.go:64] FLAG: --anonymous-auth="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183543 4183 flags.go:64] FLAG: --application-metrics-count-limit="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183609 4183 flags.go:64] FLAG: --authentication-token-webhook="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183620 4183 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183630 4183 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183638 4183 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183645 4183 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183652 4183 flags.go:64] FLAG: --azure-container-registry-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183659 4183 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183667 4183 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183679 4183 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183688 4183 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183695 4183 flags.go:64] FLAG: --cgroup-root=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183701 4183 flags.go:64] FLAG: --cgroups-per-qos="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183708 4183 flags.go:64] FLAG: --client-ca-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183715 4183 flags.go:64] FLAG: --cloud-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183721 4183 flags.go:64] FLAG: --cloud-provider=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183727 4183 flags.go:64] FLAG: --cluster-dns="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183740 4183 flags.go:64] FLAG: --cluster-domain=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183750 4183 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183757 4183 flags.go:64] FLAG: --config-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183764 4183 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183771 4183 flags.go:64] FLAG: --container-log-max-files="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183835 4183 flags.go:64] FLAG: --container-log-max-size="10Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183849 4183 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183858 4183 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183865 4183 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183872 4183 flags.go:64] FLAG: --contention-profiling="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183879 4183 flags.go:64] FLAG: --cpu-cfs-quota="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183886 4183 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183893 4183 flags.go:64] FLAG: --cpu-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183904 4183 flags.go:64] FLAG: --cpu-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183916 4183 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183923 4183 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183929 4183 flags.go:64] FLAG: --enable-debugging-handlers="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183939 4183 flags.go:64] FLAG: --enable-load-reader="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183946 4183 flags.go:64] FLAG: --enable-server="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183953 4183 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183970 4183 flags.go:64] FLAG: --event-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183978 4183 flags.go:64] FLAG: --event-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183984 4183 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183992 4183 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183998 4183 flags.go:64] FLAG: --eviction-hard=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184007 4183 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184013 4183 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184024 4183 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184035 4183 flags.go:64] FLAG: --eviction-soft=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184043 4183 flags.go:64] FLAG: --eviction-soft-grace-period=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184051 4183 flags.go:64] FLAG: --exit-on-lock-contention="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184058 4183 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184067 4183 flags.go:64] FLAG: --experimental-mounter-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184075 4183 flags.go:64] FLAG: --fail-swap-on="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184083 4183 flags.go:64] FLAG: --feature-gates=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184100 4183 flags.go:64] FLAG: --file-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184107 4183 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184114 4183 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184121 4183 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184128 4183 flags.go:64] FLAG: --healthz-port="10248"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184136 4183 flags.go:64] FLAG: --help="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184143 4183 flags.go:64] FLAG: --hostname-override=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184157 4183 flags.go:64] FLAG: --housekeeping-interval="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184164 4183 flags.go:64] FLAG: --http-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184171 4183 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184177 4183 flags.go:64] FLAG: --image-credential-provider-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184183 4183 flags.go:64] FLAG: --image-gc-high-threshold="85"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184190 4183 flags.go:64] FLAG: --image-gc-low-threshold="80"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184270 4183 flags.go:64] FLAG: --image-service-endpoint=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184285 4183 flags.go:64] FLAG: --iptables-drop-bit="15"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184301 4183 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184308 4183 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184315 4183 flags.go:64] FLAG: --kernel-memcg-notification="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184323 4183 flags.go:64] FLAG: --kube-api-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184330 4183 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184336 4183 flags.go:64] FLAG: --kube-api-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184342 4183 flags.go:64] FLAG: --kube-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184355 4183 flags.go:64] FLAG: --kube-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184366 4183 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184373 4183 flags.go:64] FLAG: --kubelet-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184380 4183 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184387 4183 flags.go:64] FLAG: --lock-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184394 4183 flags.go:64] FLAG: --log-cadvisor-usage="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184401 4183 flags.go:64] FLAG: --log-flush-frequency="5s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184408 4183 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184432 4183 flags.go:64] FLAG: --log-json-split-stream="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184440 4183 flags.go:64] FLAG: --logging-format="text"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184446 4183 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184455 4183 flags.go:64] FLAG: --make-iptables-util-chains="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184462 4183 flags.go:64] FLAG: --manifest-url=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184468 4183 flags.go:64] FLAG: --manifest-url-header=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184486 4183 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184493 4183 flags.go:64] FLAG: --max-open-files="1000000"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184502 4183 flags.go:64] FLAG: --max-pods="110"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184508 4183 flags.go:64] FLAG: --maximum-dead-containers="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184516 4183 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184523 4183 flags.go:64] FLAG: --memory-manager-policy="None"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184529 4183 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184541 4183 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184550 4183 flags.go:64] FLAG: --node-ip="192.168.126.11"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184557 4183 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184575 4183 flags.go:64] FLAG: --node-status-max-images="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184581 4183 flags.go:64] FLAG: --node-status-update-frequency="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184588 4183 flags.go:64] FLAG: --oom-score-adj="-999"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184595 4183 flags.go:64] FLAG: --pod-cidr=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184611 4183 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184623 4183 flags.go:64] FLAG: --pod-manifest-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184630 4183 flags.go:64] FLAG: --pod-max-pids="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184637 4183 flags.go:64] FLAG: --pods-per-core="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184644 4183 flags.go:64] FLAG: --port="10250"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184650 4183 flags.go:64] FLAG: --protect-kernel-defaults="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184657 4183 flags.go:64] FLAG: --provider-id=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184670 4183 flags.go:64] FLAG: --qos-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184681 4183 flags.go:64] FLAG: --read-only-port="10255"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184687 4183 flags.go:64] FLAG: --register-node="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184694 4183 flags.go:64] FLAG: --register-schedulable="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184701 4183 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184712 4183 flags.go:64] FLAG: --registry-burst="10"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184722 4183 flags.go:64] FLAG: --registry-qps="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184737 4183 flags.go:64] FLAG: --reserved-cpus=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184744 4183 flags.go:64] FLAG: --reserved-memory=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184752 4183 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184759 4183 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184765 4183 flags.go:64] FLAG: --rotate-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184878 4183 flags.go:64] FLAG: --rotate-server-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184890 4183 flags.go:64] FLAG: --runonce="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184903 4183 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184912 4183 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184919 4183 flags.go:64] FLAG: --seccomp-default="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184926 4183 flags.go:64] FLAG: --serialize-image-pulls="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184933 4183 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184940 4183 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184947 4183 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184953 4183 flags.go:64] FLAG: --storage-driver-password="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184973 4183 flags.go:64] FLAG: --storage-driver-secure="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184982 4183 flags.go:64] FLAG: --storage-driver-table="stats"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184989 4183 flags.go:64] FLAG: --storage-driver-user="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184996 4183 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185003 4183 flags.go:64] FLAG: --sync-frequency="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185010 4183 flags.go:64] FLAG: --system-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185017 4183 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185038 4183 flags.go:64] FLAG: --system-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185045 4183 flags.go:64] FLAG: --tls-cert-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185052 4183 flags.go:64] FLAG: --tls-cipher-suites="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185059 4183 flags.go:64] FLAG: --tls-min-version=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185068 4183 flags.go:64] FLAG: --tls-private-key-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185074 4183 flags.go:64] FLAG: --topology-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185081 4183 flags.go:64] FLAG: --topology-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185087 4183 flags.go:64] FLAG: --topology-manager-scope="container"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185102 4183 flags.go:64] FLAG: --v="2"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185116 4183 flags.go:64] FLAG: --version="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185124 4183 flags.go:64] FLAG: --vmodule=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185131 4183 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185139 4183 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185244 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185258 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185265 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185272 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185280 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185295 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185307 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185314 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185322 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185329 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185337 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185344 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185354 4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185369 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185375 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185381 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185387 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185394 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185400 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185406 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185411 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185423 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185432 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185438 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185444 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185450 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185455 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185463 4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185470 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185476 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185482 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185494 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185500 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185506 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185513 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185520 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185527 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185537 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185545 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185552 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185559 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185566 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185573 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185581 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185592 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185600 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185607 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185615 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185622 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185630 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185636 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185642 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185647 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185655 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185661 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185667 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185673 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185678 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185684 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185690 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185698 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214743 4183 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214852 4183 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 19:43:54 crc
kubenswrapper[4183]: W0813 19:43:54.214895 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214906 4183 feature_gate.go:227] unrecognized feature gate: NewOLM Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214914 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214922 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214932 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214940 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214947 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214955 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214962 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214970 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214978 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214986 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215020 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215030 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215038 4183 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallNutanix Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215047 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215054 4183 feature_gate.go:227] unrecognized feature gate: Example Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215064 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215070 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215077 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215084 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215091 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215098 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215106 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215113 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215120 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215127 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215136 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215145 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215154 4183 
feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215162 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215171 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215180 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215188 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215232 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215247 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215255 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215263 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215272 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215279 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215288 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215296 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215305 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215313 4183 feature_gate.go:227] unrecognized 
feature gate: InstallAlternateInfrastructureAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215321 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215333 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215341 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215348 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215357 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215365 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215373 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215382 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215390 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215399 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215407 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215416 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215424 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215432 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215440 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215449 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.215458 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215645 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215660 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215669 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215678 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215686 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215695 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215703 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215712 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215719 4183 feature_gate.go:227] 
unrecognized feature gate: NetworkLiveMigration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215727 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215736 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215744 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215754 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215763 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215832 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215847 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215855 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215864 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215873 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215881 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215889 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215897 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215904 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Aug 13 
19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215913 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215921 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215929 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215937 4183 feature_gate.go:227] unrecognized feature gate: NewOLM Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215946 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215954 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215962 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215971 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215979 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215987 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215996 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216004 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216012 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216021 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216029 4183 feature_gate.go:227] unrecognized feature gate: 
MetricsServer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216038 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216048 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216056 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216064 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216073 4183 feature_gate.go:227] unrecognized feature gate: Example Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216081 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216089 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216098 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216106 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216114 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216122 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216130 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216141 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216149 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216160 4183 feature_gate.go:240] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216169 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216177 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216185 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216227 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216244 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216252 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216261 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.216270 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.218639 4183 server.go:925] "Client rotation is on, will bootstrap in background" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.261135 4183 bootstrap.go:266] part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-06-27 13:02:31 +0000 UTC Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.264516 4183 
bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.268356 4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269062 4183 server.go:982] "Starting client certificate rotation" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269322 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.270038 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.305247 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.348409 4183 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.354284 4183 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.355040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383335 4183 remote_runtime.go:143] "Validated CRI v1 runtime API" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383439 4183 util_unix.go:103] "Using this endpoint is 
deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.423604 4183 remote_image.go:111] "Validated CRI v1 image API" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436425 4183 fs.go:132] Filesystem UUIDs: map[68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2] Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436494 4183 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm major:0 minor:43 fsType:tmpfs blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/40b1512db3f1e3b7db43a52c25ec16b90b1a271577cfa32a91a92a335a6d73c5/merged major:0 minor:44 fsType:overlay blockSize:0}] Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.453677 4183 manager.go:217] Machine: {Timestamp:2025-08-13 19:43:54.449606963 +0000 UTC m=+1.142271741 CPUVendorID:AuthenticAMD NumCores:6 NumPhysicalCores:1 NumSockets:6 CpuFrequency:2800000 MemoryCapacity:14635360256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 
SystemUUID:b5eaf2e9-3c86-474e-aca5-bab262204689 BootID:7bac8de7-aad0-4ed8-a9ad-c4391f6449b7 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:1463533568 Type:vfs Inodes:357308 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm DeviceMajor:0 DeviceMinor:43 Capacity:65536000 Type:vfs Inodes:1786543 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:7317680128 Type:vfs Inodes:1786543 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:2927075328 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:7317680128 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:85899345920 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:52:fd:fc:07:21:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:enp2s0 MacAddress:52:fd:fc:07:21:82 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:c2:6f:cd:56:e0:cc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:a9:95:66:6b:74 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:14635360256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455115 4183 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455278 4183 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.464008 4183 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465562 4183 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465947 4183 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465986 4183 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.466525 4183 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.468951 4183 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.470533 4183 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.471372 4183 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474413 4183 kubelet.go:406] "Attempting to sync node with API server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474458 4183 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475131 4183 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475372 4183 kubelet.go:322] "Adding apiserver pod source"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.476751 4183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.481718 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482235 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.482139 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482302 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.485825 4183 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.492543 4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.493577 4183 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495264 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495561 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495608 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495724 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495888 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496094 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496285 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496379 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496398 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496535 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496614 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496656 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496880 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.497815 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500830 4183 server.go:1262] "Started kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502655 4183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502841 4183 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500836 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc systemd[1]: Started Kubernetes Kubelet.
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.506975 4183 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.517440 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.518906 4183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.525606 4183 server.go:461] "Adding debug handlers to kubelet server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660549 4183 volume_manager.go:289] "The desired_state_of_world populator starts"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660966 4183 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.670638 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.675547 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.675645 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.676413 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.676439 4183 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718503 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718520 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718535 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718551 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718566 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718598 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718642 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718691 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718713 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718756 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718823 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718875 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718988 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719013 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719030 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719048 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719113 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719138 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719156 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719243 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719274 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719293 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719332 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719360 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719377 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719410 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719437 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719456 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719488 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719513 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719531 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719545 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719561 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719640 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719690 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719724 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719743 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719758 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719987 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720039 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720066 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720083 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720101 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720124 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720221 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720266 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720284 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720304 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720325 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720340 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720357 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720371 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720384 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720396 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720411 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720438 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720451 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720465 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720483 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.726965 4183 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727112 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727125 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727143 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727157 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727170 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727282 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727302 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727318 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727331 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727353 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727366 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727379 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727509 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727526 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727599 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod=""
podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727618 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727635 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727648 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727667 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727680 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727693 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" 
volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727706 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727723 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727741 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727767 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727839 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" 
volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727878 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727890 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727902 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727924 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727936 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" 
volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727948 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727960 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727977 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727993 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728005 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728016 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728033 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728049 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728062 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728086 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728502 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 
19:43:54.728516 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728528 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728546 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728562 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728575 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728596 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728609 4183 reconstruct_new.go:135] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728620 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728631 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728643 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728654 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728665 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728681 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728697 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728708 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728742 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728766 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728871 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728892 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728904 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728921 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728935 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728950 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 
volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728962 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728973 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728985 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728997 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729010 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" 
volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729045 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729058 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729071 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729084 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729565 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729583 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" 
volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729595 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729619 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729633 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729644 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729656 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729669 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729686 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729701 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729714 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729732 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc 
kubenswrapper[4183]: I0813 19:43:54.729761 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729817 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729836 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729852 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729870 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729883 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729895 
4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729909 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729922 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729934 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729946 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729959 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730684 4183 reconstruct_new.go:135] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730704 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730716 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730733 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730760 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730994 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731015 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731032 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731056 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731075 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731088 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731103 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" 
volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731115 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731133 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731163 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731260 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" 
volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731276 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731296 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731398 4183 reconstruct_new.go:102] "Volume reconstruction finished" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731411 4183 reconciler_new.go:29] "Reconciler: start to sync state" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.760614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.765043 4183 container_manager_linux.go:884] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775241 4183 factory.go:55] Registering systemd factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775368 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775770 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775873 4183 factory.go:221] Registration of the systemd container factory successfully Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.776145 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.779389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.779986 4183 factory.go:153] Registering CRI-O factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780147 4183 factory.go:221] Registration of the crio container factory successfully Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780616 4183 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780912 4183 factory.go:103] Registering Raw factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.781217 4183 manager.go:1196] Started watching for new ooms in manager Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.782546 4183 manager.go:319] Starting recovery of all containers Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.836554 4183 manager.go:324] Recovery completed Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.856954 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858719 4183 kubelet_node_status.go:729] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858742 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.878047 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.980529 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024243 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024710 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.026755 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029064 4183 cpu_manager.go:215] "Starting CPU manager" policy="none" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029249 4183 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029599 4183 state_mem.go:36] 
"Initialized new in-memory state store" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.046027 4183 policy_none.go:49] "None policy: Start" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048422 4183 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048995 4183 state_mem.go:35] "Initializing new in-memory state store" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.152712 4183 manager.go:296] "Starting Device Plugin manager" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.153754 4183 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.154469 4183 server.go:79] "Starting device plugin registration server" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.159564 4183 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160021 4183 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160109 4183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.203607 4183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207046 4183 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207448 4183 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207823 4183 kubelet.go:2343] "Starting kubelet main sync loop" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.208236 4183 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.221281 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.221355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.280947 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.309413 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.310904 4183 topology_manager.go:215] "Topology Admit Handler" 
podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.312723 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319511 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319642 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.323652 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.324535 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329319 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329356 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329377 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330172 4183 topology_manager.go:215] "Topology Admit Handler" podUID="53c1db1508241fbac1bedf9130341ffe" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330639 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330667 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332629 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332661 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333258 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335632 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335771 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335860 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336747 4183 topology_manager.go:215] "Topology Admit Handler" podUID="631cdb37fbb54e809ecc5e719aebd371" podNamespace="openshift-kube-scheduler" 
podName="openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336897 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.337520 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340045 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340406 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340446 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.404278 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.427930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429816 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 
19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429912 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.431407 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458478 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458898 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458984 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459010 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459083 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459104 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459122 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod 
\"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459251 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459318 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459384 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459415 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459465 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 
19:43:55.459494 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.506240 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.537648 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.537744 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562414 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562569 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562757 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562826 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562873 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562900 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562923 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562945 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562977 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562990 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.563241 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.664890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.688244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.699689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.729881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.738024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.755628 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.755711 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.771301 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c1db1508241fbac1bedf9130341ffe.slice/crio-e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 WatchSource:0}: Error finding container e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83: Status 404 returned error can't find the container with id e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.775105 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b WatchSource:0}: Error finding container 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b: Status 404 returned error can't find the container with id 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.776442 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc WatchSource:0}: Error finding container b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc: Status 404 returned error can't find the container with id b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.799304 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.799427 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.800647 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb2b200bca0d10cf0fe16fb7c0caf80.slice/crio-f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29 WatchSource:0}: Error finding container f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29: Status 404 returned error can't find the container with id f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29
Aug 13 19:43:56 crc kubenswrapper[4183]: W0813 19:43:56.069422 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.069914 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.082587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.227474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83"}
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.229358 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc"}
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.230869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b"}
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.232052 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234221 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234239 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234266 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.235577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29"}
Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.235746 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.237420 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9"}
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.451076 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.455457 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.508515 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.317931 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.318144 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.509595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.628935 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.629006 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.685165 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s"
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.836113 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839177 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839196 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839229 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.840852 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249354 4183 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b" exitCode=0
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249430 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249608 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266930 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269747 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" exitCode=0
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerDied","Data":"d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.270197 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.271762 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.271931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.272147 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276167 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" exitCode=0
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276473 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.291941 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293247 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294336 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6" exitCode=0
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294394 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6"}
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294503 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313425 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.505669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.854605 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.855205 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.867610 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.867659 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.324418 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"}
Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.507149 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.410433 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.466757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.467072 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.486883 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.487041 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505299 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.577033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.590270 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.720716 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.723203 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.887637 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.041735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044357 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044544 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.046129 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:44:01 crc 
kubenswrapper[4183]: I0813 19:44:01.510569 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.520531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"} Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545127 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0" exitCode=0 Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0"} Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547851 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.558076 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564287 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564398 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.565986 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: W0813 19:44:01.898722 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.898960 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.510177 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"} Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588662 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:02 crc kubenswrapper[4183]: W0813 19:44:02.882299 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: E0813 19:44:02.882601 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:03 crc kubenswrapper[4183]: W0813 19:44:03.445602 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:03 crc kubenswrapper[4183]: E0813 19:44:03.445714 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.617916 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"} Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.636725 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73" exitCode=0 Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637116 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73"} Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637226 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641321 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641454 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641475 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"} Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645099 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645135 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:05 crc kubenswrapper[4183]: E0813 19:44:05.404914 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.651064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd"} Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660344 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"} Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660370 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660455 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.699489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"} Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.700288 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.701949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702080 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702100 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709009 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c"} Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.447444 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449443 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449484 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.563401 4183 kubelet.go:2533] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705957 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709252 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709310 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709334 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15"} Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.728519 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729063 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 
19:44:07.746001 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743550 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743552 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44"} Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743630 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.744334 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746270 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746349 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747251 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc 
kubenswrapper[4183]: I0813 19:44:08.747733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.750507 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.905078 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.008274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.358473 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.581161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746313 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748464 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748543 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748652 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.749968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 
19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750022 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750296 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.169892 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.170071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171882 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.581168 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.582219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:13 crc kubenswrapper[4183]: W0813 19:44:13.494495 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.495229 4183 trace.go:236] Trace[777984701]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:03.491) (total time: 10003ms): Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:13.494) Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: [10.003254671s] [10.003254671s] END Aug 13 19:44:13 crc kubenswrapper[4183]: E0813 19:44:13.495274 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.510042 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524771 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526958 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:15 crc kubenswrapper[4183]: E0813 19:44:15.406986 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:17 crc kubenswrapper[4183]: E0813 19:44:17.290252 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Aug 13 19:44:17 crc kubenswrapper[4183]: E0813 19:44:17.452281 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Aug 13 19:44:18 crc kubenswrapper[4183]: E0813 19:44:18.909575 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": net/http: TLS handshake timeout
Aug 13 19:44:20 crc kubenswrapper[4183]: E0813 19:44:20.593140 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.170909 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded" start-of-body=
Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.171045 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded"
Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.208232 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.208402 4183 trace.go:236] Trace[505837227]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:12.205) (total time: 10002ms):
Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:22.208)
Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: [10.002428675s] [10.002428675s] END
Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.208424 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427506 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427635 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer"
Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.443211 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.443301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.492631 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.495898 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.496042 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.530058 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.535586 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.535739 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581414 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581995 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.882447 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885166 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" exitCode=255
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"}
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885557 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887352 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.888737 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.573335 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.771285 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.772341 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774544 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.811249 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.894466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.903096 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.295813 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.453246 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.455919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456074 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456132 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.472356 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.508688 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.891416 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.908121 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"}
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911957 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:25 crc kubenswrapper[4183]: E0813 19:44:25.408285 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.512733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913000 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.185479 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:26 crc kubenswrapper[4183]: W0813 19:44:26.220924 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.221145 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.508892 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.921346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.923508 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928912 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" exitCode=255
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"}
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929010 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929285 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.932302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.933985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.934318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.940734 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.943129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.953158 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.509157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.933897 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.939891 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941681 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.943245 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:27 crc kubenswrapper[4183]: E0813 19:44:27.943855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.507271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.945603 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947415 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.949265 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:28 crc kubenswrapper[4183]: E0813 19:44:28.949934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:29 crc kubenswrapper[4183]: I0813 19:44:29.510225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179631 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180009 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180293 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.189862 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.190889 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" gracePeriod=30
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.508175 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:30 crc kubenswrapper[4183]: E0813 19:44:30.598587 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.957497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958419 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" exitCode=255
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"}
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958532 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"}
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958833 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960085 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.300057 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.474098 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475940 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.476003 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.479716 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.508445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.559283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.962125 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963607 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963676 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963699 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:32 crc kubenswrapper[4183]: I0813 19:44:32.508713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:33 crc kubenswrapper[4183]: I0813 19:44:33.507968 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.509459 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891356 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893298 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.894609 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.895045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.956972 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965734 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965983 4183 certificate_manager.go:440] kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition
Aug 13 19:44:35 crc kubenswrapper[4183]: E0813 19:44:35.409388 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:35 crc kubenswrapper[4183]: I0813 19:44:35.507686 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: I0813 19:44:36.509197 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: W0813 19:44:36.583957 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: E0813 19:44:36.584065 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:37 crc kubenswrapper[4183]: I0813 19:44:37.507683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.304970 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.480243 4183 kubelet_node_status.go:402]
"Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482051 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482077 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.486195 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.507744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.508194 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:39Z is after 2025-06-26T12:47:18Z Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.580897 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.581127 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582473 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:40 crc kubenswrapper[4183]: I0813 19:44:40.507720 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z Aug 13 19:44:40 crc kubenswrapper[4183]: E0813 19:44:40.603676 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:41 crc kubenswrapper[4183]: I0813 19:44:41.507445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:41Z is after 2025-06-26T12:47:18Z Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.507559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z Aug 13 19:44:42 crc kubenswrapper[4183]: W0813 19:44:42.522365 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z Aug 13 19:44:42 crc kubenswrapper[4183]: E0813 19:44:42.522440 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.581872 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.582387 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:43 crc kubenswrapper[4183]: I0813 19:44:43.508421 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:43Z is after 2025-06-26T12:47:18Z Aug 13 19:44:44 crc kubenswrapper[4183]: I0813 19:44:44.507425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:44Z is after 2025-06-26T12:47:18Z Aug 13 19:44:45 crc kubenswrapper[4183]: W0813 19:44:45.280999 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.281599 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.309494 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.410132 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.486592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.489724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490692 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.496415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 
2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.508552 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.352404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.353013 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354587 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.507711 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:46Z is after 2025-06-26T12:47:18Z Aug 13 19:44:47 crc kubenswrapper[4183]: W0813 19:44:47.185997 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z Aug 13 19:44:47 crc kubenswrapper[4183]: E0813 19:44:47.186303 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z Aug 13 19:44:47 crc kubenswrapper[4183]: I0813 19:44:47.508005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z Aug 13 19:44:48 crc kubenswrapper[4183]: I0813 19:44:48.530896 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:48Z is after 2025-06-26T12:47:18Z Aug 13 19:44:49 crc kubenswrapper[4183]: I0813 19:44:49.508142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:49Z is after 2025-06-26T12:47:18Z Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.208245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209677 4183 kubelet_node_status.go:729] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209728 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.211129 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.508572 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z Aug 13 19:44:50 crc kubenswrapper[4183]: E0813 19:44:50.611066 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.030401 4183 logs.go:325] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.045562 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"} Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.046059 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.510559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:51Z is after 2025-06-26T12:47:18Z Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.054591 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.055848 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064063 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" exitCode=255 Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064165 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"} Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064305 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.070693 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.072699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.319223 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.496694 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498405 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.499107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.504188 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.507577 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581562 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581752 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.070983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.508312 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:53Z is after 2025-06-26T12:47:18Z Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.508279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:44:54Z is after 2025-06-26T12:47:18Z Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657538 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657720 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657741 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657755 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.891466 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.892106 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.893700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.895662 4183 scope.go:117] "RemoveContainer" 
containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:44:54 crc kubenswrapper[4183]: E0813 19:44:54.896216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:55 crc kubenswrapper[4183]: E0813 19:44:55.410662 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:55 crc kubenswrapper[4183]: I0813 19:44:55.507525 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:56 crc kubenswrapper[4183]: I0813 19:44:56.508760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.507157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:57Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563345 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565852 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.566000 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.571517 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:44:57 crc kubenswrapper[4183]: E0813 19:44:57.572262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:58 crc kubenswrapper[4183]: I0813 19:44:58.507190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:58Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.326432 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.504460 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506660 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506737 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.509406 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.512950 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.507961 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:00 crc kubenswrapper[4183]: E0813 19:45:00.615941 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995163 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995291 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.002082 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.003082 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" gracePeriod=30
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.100706 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log"
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.102983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log"
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106342 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" exitCode=255
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"}
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106447 4183 scope.go:117] "RemoveContainer" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"
Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.508464 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.111742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log"
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.113541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"}
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.113650 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114682 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.509447 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:02Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.116281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:03Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:04 crc kubenswrapper[4183]: I0813 19:45:04.509005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:05 crc kubenswrapper[4183]: E0813 19:45:05.410927 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:05 crc kubenswrapper[4183]: I0813 19:45:05.509997 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.332956 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.507894 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.514149 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.520556 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.969439 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.974382 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:07 crc kubenswrapper[4183]: I0813 19:45:07.507969 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:07Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:08 crc kubenswrapper[4183]: I0813 19:45:08.508286 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:08Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.507931 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:09Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581036 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581296 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582950 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:10 crc kubenswrapper[4183]: I0813 19:45:10.508251 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:10 crc kubenswrapper[4183]: E0813 19:45:10.621077 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.507141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:11Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558506 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558664 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.209239 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211092 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.212843 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:45:12 crc kubenswrapper[4183]: W0813 19:45:12.375543 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:12 crc kubenswrapper[4183]: E0813 19:45:12.375667 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582036 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.152957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156207 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"}
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156392 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157717 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.337071 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.508426 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.520646 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522603 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.528513 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.161681 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.162518 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.166966 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" exitCode=255
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"}
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167107 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167229 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168632 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.170929 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:14 crc kubenswrapper[4183]: E0813 19:45:14.171697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.208869 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.507841 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.891288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.171833 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.174120 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175060 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.176106 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:15 crc kubenswrapper[4183]: E0813 19:45:15.176437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:15 crc kubenswrapper[4183]: E0813 19:45:15.411865 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.507316 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:16 crc kubenswrapper[4183]: I0813 19:45:16.509268 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.509667 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563182 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563484 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.566391 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:17 crc kubenswrapper[4183]: E0813 19:45:17.566892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:18 crc kubenswrapper[4183]: I0813 19:45:18.508241 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:19 crc kubenswrapper[4183]: I0813 19:45:19.511330 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.341923 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.508349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.528918 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530625 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.534200 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.627698 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:21 crc kubenswrapper[4183]: I0813 19:45:21.508311 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: W0813 19:45:22.431240 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: E0813 19:45:22.431305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.507124 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:23 crc kubenswrapper[4183]: I0813 19:45:23.507832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:23Z is after 2025-06-26T12:47:18Z Aug 13 19:45:24 crc kubenswrapper[4183]: I0813 19:45:24.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:24Z is after 2025-06-26T12:47:18Z Aug 13 19:45:25 crc kubenswrapper[4183]: E0813 19:45:25.412585 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"crc\" not found" Aug 13 19:45:25 crc kubenswrapper[4183]: I0813 19:45:25.508881 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:25Z is after 2025-06-26T12:47:18Z Aug 13 19:45:26 crc kubenswrapper[4183]: I0813 19:45:26.507470 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:26Z is after 2025-06-26T12:47:18Z Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.346884 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.510549 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.534700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540097 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 
19:45:27.540188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540270 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.544948 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:28 crc kubenswrapper[4183]: I0813 19:45:28.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:28Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: W0813 19:45:29.332190 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: E0813 19:45:29.332305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: I0813 19:45:29.508640 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:30 crc kubenswrapper[4183]: I0813 19:45:30.507496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z Aug 13 19:45:30 crc kubenswrapper[4183]: E0813 19:45:30.632844 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.209282 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 
19:45:31.211543 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.214026 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:31 crc kubenswrapper[4183]: E0813 19:45:31.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.508192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:31Z is after 2025-06-26T12:47:18Z Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769522 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769813 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771993 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.772154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774314 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774876 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" 
containerID="cri-o://dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" gracePeriod=30 Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248265 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248965 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250470 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" exitCode=255 Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250676 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250666 4183 scope.go:117] "RemoveContainer" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251767 4183 kubelet_node_status.go:729] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.507279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:32Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.259638 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.262592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264143 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.508014 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 crc kubenswrapper[4183]: W0813 19:45:33.705946 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 crc kubenswrapper[4183]: E0813 19:45:33.706061 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.352501 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.508937 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.545880 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 
19:45:34.548101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548219 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.552614 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:35 crc kubenswrapper[4183]: E0813 19:45:35.413709 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:35 crc kubenswrapper[4183]: I0813 19:45:35.507972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:35Z is after 2025-06-26T12:47:18Z Aug 13 19:45:36 crc kubenswrapper[4183]: I0813 19:45:36.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:36Z is after 2025-06-26T12:47:18Z Aug 13 19:45:37 crc kubenswrapper[4183]: I0813 19:45:37.508249 4183 
csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:37Z is after 2025-06-26T12:47:18Z Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.508206 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.969995 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:45:38 crc kubenswrapper[4183]: E0813 19:45:38.976170 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.508669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:39Z is after 2025-06-26T12:47:18Z Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.581199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 
19:45:39.581513 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:40 crc kubenswrapper[4183]: I0813 19:45:40.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z Aug 13 19:45:40 crc kubenswrapper[4183]: E0813 19:45:40.639384 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:41 crc 
kubenswrapper[4183]: E0813 19:45:41.357739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.508453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.554204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557428 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558607 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:41 crc 
kubenswrapper[4183]: I0813 19:45:41.559625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:41 crc kubenswrapper[4183]: E0813 19:45:41.562659 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.508395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:42Z is after 2025-06-26T12:47:18Z Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582078 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 
19:45:43.208292 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209891 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209995 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.210016 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.211226 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:43 crc kubenswrapper[4183]: E0813 19:45:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.510590 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.209354 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.508431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:45 crc kubenswrapper[4183]: E0813 19:45:45.414942 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:45 crc kubenswrapper[4183]: I0813 19:45:45.508706 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:46 crc kubenswrapper[4183]: I0813 19:45:46.507259 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:47 crc kubenswrapper[4183]: I0813 19:45:47.509695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.363856 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.508271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.564016 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567522 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567574 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567632 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.572082 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.208719 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210354 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210508 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210740 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.508264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:50 crc kubenswrapper[4183]: I0813 19:45:50.508065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643361 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643457 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.647449 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:51 crc kubenswrapper[4183]: I0813 19:45:51.509519 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:51 crc kubenswrapper[4183]: E0813 19:45:51.794485 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.509904 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.581821 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.582173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:45:53 crc kubenswrapper[4183]: I0813 19:45:53.509729 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.508647 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659167 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659309 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659344 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.368548 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.416050 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.507485 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.574137 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576708 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.580415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.209118 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.212460 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.510678 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.354644 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log"
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"}
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.358960 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.508048 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:57Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.563936 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.363040 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.364913 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367439 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" exitCode=255
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367539 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"}
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367572 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367603 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369404 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.371325 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:45:58 crc kubenswrapper[4183]: E0813 19:45:58.371984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.508439 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:58Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.376302 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.384107 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386155 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.388711 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:45:59 crc kubenswrapper[4183]: E0813 19:45:59.389651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.507063 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:59Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:00 crc kubenswrapper[4183]: I0813 19:46:00.517885 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:00Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:01 crc kubenswrapper[4183]: W0813 19:46:01.348988 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.349134 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:01 crc kubenswrapper[4183]: I0813 19:46:01.507847 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.804456 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.375954 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.511228 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571763 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572064 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572264 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.576042 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.576385 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" gracePeriod=30
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.581620 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584487 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584708 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584834 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.595868 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.399607 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.400721 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.402969 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" exitCode=255
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"}
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403062 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"}
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403091 4183 scope.go:117] "RemoveContainer" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404582 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.507733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:03Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.413221 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.509144 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892034 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892472 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.894998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895294 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.896912 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:04 crc kubenswrapper[4183]: E0813 19:46:04.897399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.416222 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:05 crc kubenswrapper[4183]: W0813 19:46:05.449941 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.450097 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:05 crc kubenswrapper[4183]: I0813 19:46:05.508913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:06 crc kubenswrapper[4183]: I0813 19:46:06.510697 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:07 crc kubenswrapper[4183]: I0813 19:46:07.508141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:07Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:08 crc kubenswrapper[4183]: I0813 19:46:08.509106 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:08Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.380169 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.508176 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.580950 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.581183 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.584743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.596742 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598652 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598702 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598745 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.605621 4183 
kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.509770 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.969747 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:46:10 crc kubenswrapper[4183]: E0813 19:46:10.975379 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.511689 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.559714 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.561022 4183 
kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564307 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:11 crc kubenswrapper[4183]: E0813 19:46:11.816090 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.509294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:12Z is after 2025-06-26T12:47:18Z Aug 
13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581260 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:46:13 crc kubenswrapper[4183]: I0813 19:46:13.519035 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:13Z is after 2025-06-26T12:47:18Z Aug 13 19:46:14 crc kubenswrapper[4183]: I0813 19:46:14.509354 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:14Z is after 2025-06-26T12:47:18Z Aug 13 19:46:15 crc kubenswrapper[4183]: E0813 19:46:15.416692 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:15 crc kubenswrapper[4183]: I0813 19:46:15.508135 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:15Z is after 2025-06-26T12:47:18Z Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.385964 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.507766 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.606104 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607953 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.612289 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:17 crc kubenswrapper[4183]: I0813 19:46:17.507760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:17Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: I0813 19:46:18.509153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: W0813 19:46:18.734308 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: E0813 19:46:18.734454 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.209340 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.212634 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:19 crc kubenswrapper[4183]: E0813 19:46:19.213052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.513958 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:19Z is after 2025-06-26T12:47:18Z Aug 13 19:46:20 crc kubenswrapper[4183]: I0813 19:46:20.508721 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:20Z is after 2025-06-26T12:47:18Z Aug 13 19:46:21 crc kubenswrapper[4183]: I0813 
19:46:21.509911 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z Aug 13 19:46:21 crc kubenswrapper[4183]: E0813 19:46:21.820321 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.508481 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:22Z is after 2025-06-26T12:47:18Z Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580330 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580470 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.390894 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.508225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.613426 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615582 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615626 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.619335 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:24 crc kubenswrapper[4183]: I0813 19:46:24.508866 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:24Z is after 2025-06-26T12:47:18Z Aug 13 19:46:25 crc kubenswrapper[4183]: E0813 19:46:25.417160 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:25 crc kubenswrapper[4183]: I0813 19:46:25.508965 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:25Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc kubenswrapper[4183]: W0813 19:46:26.192309 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc 
kubenswrapper[4183]: E0813 19:46:26.192390 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc kubenswrapper[4183]: I0813 19:46:26.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:27 crc kubenswrapper[4183]: I0813 19:46:27.508416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:27Z is after 2025-06-26T12:47:18Z Aug 13 19:46:28 crc kubenswrapper[4183]: I0813 19:46:28.509326 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:28Z is after 2025-06-26T12:47:18Z Aug 13 19:46:29 crc kubenswrapper[4183]: I0813 19:46:29.507732 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:29Z is after 2025-06-26T12:47:18Z Aug 13 19:46:30 crc kubenswrapper[4183]: 
E0813 19:46:30.396465 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.619914 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622127 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:30 crc kubenswrapper[4183]: E0813 19:46:30.626393 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:31 crc kubenswrapper[4183]: I0813 19:46:31.507850 
4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z Aug 13 19:46:31 crc kubenswrapper[4183]: E0813 19:46:31.824915 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.209187 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210595 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210615 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.212945 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.510109 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581386 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581546 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581805 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583846 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585397 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585847 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" gracePeriod=30
Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.750551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.508606 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.531882 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.533863 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.536919 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" exitCode=255
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537005 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"}
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537060 4183 scope.go:117] "RemoveContainer" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537440 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539432 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.542224 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:46:33 crc kubenswrapper[4183]: E0813 19:46:33.543207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.511695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.542528 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log"
Aug 13 19:46:35 crc kubenswrapper[4183]: E0813 19:46:35.417415 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:35 crc kubenswrapper[4183]: I0813 19:46:35.508819 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.208887 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.508479 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.401966 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.509111 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.627700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630433 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630466 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.634557 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:38 crc kubenswrapper[4183]: I0813 19:46:38.508190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:39 crc kubenswrapper[4183]: I0813 19:46:39.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.523226 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:46:40 crc kubenswrapper[4183]: E0813 19:46:40.524113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:46:41 crc kubenswrapper[4183]: I0813 19:46:41.507265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:41 crc kubenswrapper[4183]: E0813 19:46:41.829421 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:42 crc kubenswrapper[4183]: I0813 19:46:42.508908 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:42 crc kubenswrapper[4183]: I0813 19:46:42.969557 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:46:42 crc kubenswrapper[4183]: E0813 19:46:42.974395 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:43 crc kubenswrapper[4183]: I0813 19:46:43.507078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.408387 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.508719 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.634877 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636828 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636871 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636915 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.640455 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:45 crc kubenswrapper[4183]: E0813 19:46:45.418298 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:45 crc kubenswrapper[4183]: I0813 19:46:45.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:46 crc kubenswrapper[4183]: I0813 19:46:46.509767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.209002 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211988 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.212106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.217114 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:47 crc kubenswrapper[4183]: E0813 19:46:47.218997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.509395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:48 crc kubenswrapper[4183]: I0813 19:46:48.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:49 crc kubenswrapper[4183]: I0813 19:46:49.509521 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:50 crc kubenswrapper[4183]: I0813 19:46:50.511905 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.415323 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.512918 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.640738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643923 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.648044 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.835285 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:52 crc kubenswrapper[4183]: I0813 19:46:52.508157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.209177 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211254 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.214540 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:46:53 crc kubenswrapper[4183]: E0813 19:46:53.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.508249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.509012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660046 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660276 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660354 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660490 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:46:54 crc kubenswrapper[4183]: W0813 19:46:54.762914 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:54 crc kubenswrapper[4183]: E0813 19:46:54.763075 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:55 crc kubenswrapper[4183]: E0813 19:46:55.419283 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:55 crc kubenswrapper[4183]: I0813 19:46:55.507740 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:56 crc kubenswrapper[4183]: W0813 19:46:56.316182 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:56 crc kubenswrapper[4183]: E0813 19:46:56.317742 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:56 crc kubenswrapper[4183]: I0813 19:46:56.507468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:57 crc kubenswrapper[4183]: I0813 19:46:57.510435 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:57Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:58 crc kubenswrapper[4183]: E0813 19:46:58.420378 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.510520 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.648586 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650638 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650710 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:58 crc kubenswrapper[4183]: E0813 19:46:58.655036 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:59 crc kubenswrapper[4183]: I0813 19:46:59.507745 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:59Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.209201 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211095 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.212387 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:47:00 crc kubenswrapper[4183]: E0813 19:47:00.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.507343 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:00Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.209026 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.210969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211158 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.508677 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:01 crc kubenswrapper[4183]: E0813 19:47:01.841030 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:02 crc kubenswrapper[4183]: I0813 19:47:02.508683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:02Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:03 crc kubenswrapper[4183]: I0813 19:47:03.508739 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:03Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:04 crc kubenswrapper[4183]: W0813 19:47:04.417066 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:04 crc kubenswrapper[4183]: E0813 19:47:04.417169 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:04 crc kubenswrapper[4183]: I0813 19:47:04.509200 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208117 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208129 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.211447 4183 scope.go:117] "RemoveContainer"
containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.212145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.419590 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.424203 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.507503 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.655938 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657885 4183 kubelet_node_status.go:729] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657938 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.661734 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:06 crc kubenswrapper[4183]: I0813 19:47:06.507595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:06Z is after 2025-06-26T12:47:18Z Aug 13 19:47:07 crc kubenswrapper[4183]: I0813 19:47:07.508034 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:07Z is after 2025-06-26T12:47:18Z Aug 13 19:47:08 crc kubenswrapper[4183]: I0813 19:47:08.509584 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:08Z is after 2025-06-26T12:47:18Z Aug 13 19:47:09 crc kubenswrapper[4183]: I0813 19:47:09.508409 4183 csi_plugin.go:880] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:09Z is after 2025-06-26T12:47:18Z Aug 13 19:47:10 crc kubenswrapper[4183]: I0813 19:47:10.508936 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:10Z is after 2025-06-26T12:47:18Z Aug 13 19:47:11 crc kubenswrapper[4183]: I0813 19:47:11.508554 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z Aug 13 19:47:11 crc kubenswrapper[4183]: E0813 19:47:11.846977 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.429244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.508767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.662198 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664207 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664223 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664255 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.667699 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:13 crc kubenswrapper[4183]: I0813 19:47:13.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:13Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.511515 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.969073 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:47:14 crc kubenswrapper[4183]: E0813 19:47:14.974040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.211738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.214599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.215001 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.216039 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.223661 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.224342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.419994 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.507591 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:15Z is after 2025-06-26T12:47:18Z Aug 13 19:47:16 crc kubenswrapper[4183]: I0813 19:47:16.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:16Z is after 2025-06-26T12:47:18Z Aug 13 19:47:17 crc kubenswrapper[4183]: I0813 19:47:17.507568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:17Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: W0813 19:47:18.411205 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: E0813 19:47:18.411326 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: I0813 19:47:18.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.433994 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.507416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.668611 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671895 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671939 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.675885 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.209458 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212833 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212950 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.214992 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.508204 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:20Z is after 2025-06-26T12:47:18Z Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.719999 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722689 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723695 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723877 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.509064 
4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.558967 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.725590 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: E0813 19:47:21.851710 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC 
m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:22 crc kubenswrapper[4183]: I0813 19:47:22.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:22Z is after 2025-06-26T12:47:18Z Aug 13 19:47:23 crc kubenswrapper[4183]: I0813 19:47:23.509622 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:23Z is after 2025-06-26T12:47:18Z Aug 13 19:47:24 crc kubenswrapper[4183]: I0813 19:47:24.508707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:24Z is after 2025-06-26T12:47:18Z Aug 13 19:47:25 crc kubenswrapper[4183]: E0813 19:47:25.420710 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:25 crc kubenswrapper[4183]: I0813 19:47:25.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:25Z is after 2025-06-26T12:47:18Z Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 
19:47:26.438944 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.509324 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.676882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678283 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 19:47:26.683126 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:27 crc kubenswrapper[4183]: I0813 19:47:27.508125 4183 
csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:27Z is after 2025-06-26T12:47:18Z Aug 13 19:47:28 crc kubenswrapper[4183]: I0813 19:47:28.512301 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:28Z is after 2025-06-26T12:47:18Z Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.208320 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210723 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.212256 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:29Z is after 2025-06-26T12:47:18Z Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.581158 4183 
kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.582083 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584193 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584290 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.508950 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.759441 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"}
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762366 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.768549 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.769851 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772701 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" exitCode=255
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772760 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"}
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772973 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.773033 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.775962 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.776312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858065 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858166 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.862471 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.510068 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.581916 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.582098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.780905 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.454745 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.510367 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.685376 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687888 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.697290 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.509279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891441 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891615 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.895909 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:34 crc kubenswrapper[4183]: E0813 19:47:34.896584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:35 crc kubenswrapper[4183]: E0813 19:47:35.422135 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:35 crc kubenswrapper[4183]: I0813 19:47:35.508730 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: I0813 19:47:36.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: E0813 19:47:36.808517 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.507996 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564474 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.566009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.567289 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:37 crc kubenswrapper[4183]: E0813 19:47:37.567716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:38 crc kubenswrapper[4183]: I0813 19:47:38.509349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:39 crc kubenswrapper[4183]: I0813 19:47:39.508117 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.462748 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.508756 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.698172 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700442 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.709132 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:41 crc kubenswrapper[4183]: I0813 19:47:41.512169 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.507757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582073 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582216 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:47:43 crc kubenswrapper[4183]: I0813 19:47:43.508350 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:44 crc kubenswrapper[4183]: I0813 19:47:44.508294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:45 crc kubenswrapper[4183]: E0813 19:47:45.422727 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:45 crc kubenswrapper[4183]: I0813 19:47:45.509076 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.508744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.812453 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.969286 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.975593 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.467519 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.509582 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.709930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713987 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.714020 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.718181 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:48 crc kubenswrapper[4183]: W0813 19:47:48.118499 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: E0813 19:47:48.118609 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: I0813 19:47:48.508468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.209234 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.210976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.212341 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:49 crc kubenswrapper[4183]: E0813 19:47:49.212814 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.507056 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.508037 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.893941 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894144 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894387 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.898064 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.898425 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" gracePeriod=30
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.022282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:47:51 crc kubenswrapper[4183]: W0813 19:47:51.416612 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.416762 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.849917 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.851326 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854231 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" exitCode=255
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854297 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"}
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854357 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854491 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.857494 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.859186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.507851 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.859598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:47:53 crc kubenswrapper[4183]: I0813 19:47:53.508336 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:53 crc kubenswrapper[4183]: W0813 19:47:53.683937 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:53 crc kubenswrapper[4183]: E0813 19:47:53.684046 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.472244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.507411 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661003 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661149 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661179 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661211 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661232 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.719219 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721536 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.725028 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:55 crc kubenswrapper[4183]: E0813 19:47:55.424009 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:55 crc kubenswrapper[4183]: I0813 19:47:55.508465 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:55Z is after 2025-06-26T12:47:18Z Aug 13 19:47:56 crc kubenswrapper[4183]: I0813 19:47:56.509220 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z Aug 13 19:47:56 crc kubenswrapper[4183]: E0813 19:47:56.817564 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:57 crc kubenswrapper[4183]: I0813 19:47:57.508461 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:57Z is after 2025-06-26T12:47:18Z Aug 13 19:47:58 crc kubenswrapper[4183]: I0813 19:47:58.508564 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:58Z is after 2025-06-26T12:47:18Z Aug 13 19:47:59 crc kubenswrapper[4183]: I0813 19:47:59.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:59Z is after 2025-06-26T12:47:18Z Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.208959 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211760 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.507257 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:00Z is after 2025-06-26T12:47:18Z Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518471 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520642 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.522654 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:00 crc kubenswrapper[4183]: E0813 19:48:00.523656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:01 crc kubenswrapper[4183]: E0813 19:48:01.478172 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.507668 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.725214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727358 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:01 crc kubenswrapper[4183]: 
E0813 19:48:01.737482 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:02 crc kubenswrapper[4183]: I0813 19:48:02.509365 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:02Z is after 2025-06-26T12:47:18Z Aug 13 19:48:03 crc kubenswrapper[4183]: I0813 19:48:03.507856 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:03Z is after 2025-06-26T12:47:18Z Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.208541 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210268 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.211905 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:04 crc kubenswrapper[4183]: E0813 
19:48:04.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.508972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:04Z is after 2025-06-26T12:47:18Z Aug 13 19:48:05 crc kubenswrapper[4183]: E0813 19:48:05.424977 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:05 crc kubenswrapper[4183]: I0813 19:48:05.510017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:05Z is after 2025-06-26T12:47:18Z Aug 13 19:48:06 crc kubenswrapper[4183]: I0813 19:48:06.509015 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z Aug 13 19:48:06 crc kubenswrapper[4183]: E0813 19:48:06.823687 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:07 crc kubenswrapper[4183]: I0813 19:48:07.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:07Z is after 2025-06-26T12:47:18Z Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.208284 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210308 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.482375 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.529238 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.737626 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739132 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739309 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739419 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.742847 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:09 crc kubenswrapper[4183]: I0813 19:48:09.508101 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:09Z is after 2025-06-26T12:47:18Z Aug 13 19:48:10 crc kubenswrapper[4183]: I0813 19:48:10.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:10Z is after 2025-06-26T12:47:18Z Aug 13 19:48:11 crc kubenswrapper[4183]: I0813 19:48:11.507065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:11Z is after 2025-06-26T12:47:18Z Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.208022 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209637 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209725 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.211524 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:12 crc kubenswrapper[4183]: E0813 19:48:12.212281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.508424 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:12Z is after 2025-06-26T12:47:18Z Aug 13 19:48:13 crc kubenswrapper[4183]: I0813 19:48:13.508153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:13Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: I0813 19:48:14.508084 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: W0813 19:48:14.894124 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: E0813 19:48:14.894223 4183 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.425881 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.486630 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.507913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.743079 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745850 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745935 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745967 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.756009 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:16 crc kubenswrapper[4183]: I0813 19:48:16.507684 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z Aug 13 19:48:16 crc kubenswrapper[4183]: E0813 19:48:16.828651 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:17 crc kubenswrapper[4183]: I0813 19:48:17.508695 
4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:17Z is after 2025-06-26T12:47:18Z Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.208882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.211535 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.509491 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is 
after 2025-06-26T12:47:18Z Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.969699 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.974609 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is after 2025-06-26T12:47:18Z Aug 13 19:48:19 crc kubenswrapper[4183]: I0813 19:48:19.507337 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:19Z is after 2025-06-26T12:47:18Z Aug 13 19:48:20 crc kubenswrapper[4183]: I0813 19:48:20.509878 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:20Z is after 2025-06-26T12:47:18Z Aug 13 19:48:21 crc kubenswrapper[4183]: I0813 19:48:21.507142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:21Z is after 2025-06-26T12:47:18Z Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.492982 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.509072 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.756562 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758553 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.762269 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:23 crc kubenswrapper[4183]: I0813 19:48:23.508701 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:23Z is after 2025-06-26T12:47:18Z Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.208815 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.216334 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:24 crc kubenswrapper[4183]: E0813 19:48:24.218140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.508837 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:24Z is after 2025-06-26T12:47:18Z Aug 13 19:48:25 crc kubenswrapper[4183]: 
I0813 19:48:25.208831 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210182 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:25 crc kubenswrapper[4183]: E0813 19:48:25.427029 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.507028 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:25Z is after 2025-06-26T12:47:18Z Aug 13 19:48:26 crc kubenswrapper[4183]: I0813 19:48:26.509146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z Aug 13 19:48:26 crc kubenswrapper[4183]: E0813 19:48:26.834373 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z" 
event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:27 crc kubenswrapper[4183]: I0813 19:48:27.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:27Z is after 2025-06-26T12:47:18Z Aug 13 19:48:28 crc kubenswrapper[4183]: W0813 19:48:28.188409 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z Aug 13 19:48:28 crc kubenswrapper[4183]: E0813 19:48:28.188557 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z Aug 13 19:48:28 crc kubenswrapper[4183]: I0813 19:48:28.507603 4183 csi_plugin.go:880] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.500911 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.511026 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.762589 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764341 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.768854 4183 kubelet_node_status.go:100] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:30 crc kubenswrapper[4183]: I0813 19:48:30.517188 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:30Z is after 2025-06-26T12:47:18Z Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.209108 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210953 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.212398 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:31 crc kubenswrapper[4183]: E0813 19:48:31.212827 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:31 crc 
kubenswrapper[4183]: I0813 19:48:31.507232 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:31Z is after 2025-06-26T12:47:18Z Aug 13 19:48:32 crc kubenswrapper[4183]: I0813 19:48:32.507707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:32Z is after 2025-06-26T12:47:18Z Aug 13 19:48:33 crc kubenswrapper[4183]: I0813 19:48:33.508146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:33Z is after 2025-06-26T12:47:18Z Aug 13 19:48:34 crc kubenswrapper[4183]: I0813 19:48:34.507587 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:34Z is after 2025-06-26T12:47:18Z Aug 13 19:48:35 crc kubenswrapper[4183]: E0813 19:48:35.428027 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:35 crc kubenswrapper[4183]: I0813 19:48:35.507216 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:48:35Z is after 2025-06-26T12:47:18Z Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.505587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.507713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z Aug 13 19:48:36 crc kubenswrapper[4183]: W0813 19:48:36.568675 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.568942 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.769224 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770923 4183 kubelet_node_status.go:729] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771012 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.778389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.842056 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.209111 4183 kubelet_node_status.go:402] "Setting node 
annotation to enable volume controller attach/detach" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211424 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.213289 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:37 crc kubenswrapper[4183]: E0813 19:48:37.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.507690 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:37Z is after 2025-06-26T12:47:18Z Aug 13 19:48:38 crc kubenswrapper[4183]: I0813 19:48:38.510445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:38Z is after 
2025-06-26T12:47:18Z Aug 13 19:48:39 crc kubenswrapper[4183]: I0813 19:48:39.508593 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:39Z is after 2025-06-26T12:47:18Z Aug 13 19:48:40 crc kubenswrapper[4183]: I0813 19:48:40.509016 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:40Z is after 2025-06-26T12:47:18Z Aug 13 19:48:41 crc kubenswrapper[4183]: I0813 19:48:41.508595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z Aug 13 19:48:41 crc kubenswrapper[4183]: W0813 19:48:41.776148 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z Aug 13 19:48:41 crc kubenswrapper[4183]: E0813 19:48:41.776301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.208554 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.212399 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:42 crc kubenswrapper[4183]: E0813 19:48:42.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.509594 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:42Z is after 2025-06-26T12:47:18Z Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.508017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.510283 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.779767 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781718 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.785898 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:44 crc kubenswrapper[4183]: I0813 19:48:44.508607 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:44Z is after 2025-06-26T12:47:18Z Aug 13 19:48:45 crc kubenswrapper[4183]: E0813 19:48:45.428864 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:45 crc kubenswrapper[4183]: I0813 19:48:45.508371 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:45Z is after 2025-06-26T12:47:18Z Aug 13 19:48:46 crc kubenswrapper[4183]: I0813 19:48:46.507471 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z Aug 13 19:48:46 crc kubenswrapper[4183]: E0813 19:48:46.846934 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:47 crc kubenswrapper[4183]: I0813 19:48:47.507610 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:47Z is after 2025-06-26T12:47:18Z Aug 13 19:48:48 crc kubenswrapper[4183]: I0813 19:48:48.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:48Z is after 2025-06-26T12:47:18Z Aug 13 19:48:49 crc kubenswrapper[4183]: I0813 19:48:49.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z Aug 13 19:48:49 crc kubenswrapper[4183]: W0813 19:48:49.644590 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z Aug 13 19:48:49 crc kubenswrapper[4183]: E0813 19:48:49.644687 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.209026 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210881 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210991 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.212513 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.508219 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.516201 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.787053 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.788997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789071 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789120 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.792941 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.969403 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.974173 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z Aug 13 19:48:51 crc kubenswrapper[4183]: I0813 19:48:51.507882 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:51Z is after 2025-06-26T12:47:18Z Aug 13 19:48:52 crc kubenswrapper[4183]: I0813 19:48:52.508568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:52Z is after 2025-06-26T12:47:18Z Aug 13 19:48:53 crc kubenswrapper[4183]: I0813 19:48:53.508038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:53Z is after 2025-06-26T12:47:18Z Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.270502 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.288343 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.508609 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:54Z is after 2025-06-26T12:47:18Z Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662493 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662703 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.208643 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210485 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.212012 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.212463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.269841 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.429093 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.507617 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:55Z is after 2025-06-26T12:47:18Z Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.269957 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:56 crc kubenswrapper[4183]: I0813 19:48:56.508306 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.851929 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.270448 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.508358 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.520207 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.793333 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.794972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.796000 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.801474 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:58 crc kubenswrapper[4183]: E0813 19:48:58.271041 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:58 crc kubenswrapper[4183]: I0813 19:48:58.508150 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:58Z is after 2025-06-26T12:47:18Z Aug 13 19:48:59 crc kubenswrapper[4183]: E0813 19:48:59.270193 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:59 crc kubenswrapper[4183]: I0813 19:48:59.507252 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:59Z is after 2025-06-26T12:47:18Z Aug 13 19:49:00 crc kubenswrapper[4183]: E0813 19:49:00.270093 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:00 crc kubenswrapper[4183]: I0813 19:49:00.507642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:00Z is after 2025-06-26T12:47:18Z Aug 13 19:49:01 crc kubenswrapper[4183]: E0813 19:49:01.270053 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:01 crc kubenswrapper[4183]: I0813 19:49:01.507537 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:01Z is after 2025-06-26T12:47:18Z Aug 13 19:49:02 crc kubenswrapper[4183]: E0813 19:49:02.270147 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:02 crc kubenswrapper[4183]: I0813 19:49:02.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:02Z is after 2025-06-26T12:47:18Z Aug 13 19:49:03 crc kubenswrapper[4183]: E0813 19:49:03.270170 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:03 crc kubenswrapper[4183]: I0813 19:49:03.508092 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:03Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.208680 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.212286 4183 scope.go:117] "RemoveContainer" 
containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.213044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.270334 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.289000 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.508476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.525146 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.801907 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803313 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803375 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.807056 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.270106 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.430194 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:05 crc kubenswrapper[4183]: I0813 19:49:05.507344 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:05Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.270028 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:06 crc kubenswrapper[4183]: I0813 19:49:06.507669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.858719 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:07 crc kubenswrapper[4183]: E0813 19:49:07.270036 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:07 crc kubenswrapper[4183]: I0813 19:49:07.507038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:07Z is after 2025-06-26T12:47:18Z Aug 13 19:49:08 crc kubenswrapper[4183]: E0813 19:49:08.270104 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:08 crc kubenswrapper[4183]: I0813 19:49:08.509900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:08Z is after 2025-06-26T12:47:18Z Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.208903 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211430 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.212927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:09 crc kubenswrapper[4183]: E0813 19:49:09.270291 4183 transport.go:123] "No valid client 
certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.507217 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:09Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.209179 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210653 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.212199 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 
19:49:10.270044 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.507652 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: W0813 19:49:10.663149 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.663336 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.270245 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.530195 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.808058 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817390 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.820833 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:12 crc kubenswrapper[4183]: E0813 19:49:12.270100 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:12 crc kubenswrapper[4183]: I0813 19:49:12.508425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:12Z is after 2025-06-26T12:47:18Z Aug 13 19:49:13 crc kubenswrapper[4183]: E0813 19:49:13.270198 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:13 crc kubenswrapper[4183]: I0813 19:49:13.511476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:13Z is after 2025-06-26T12:47:18Z Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.270548 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.289133 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:14 crc kubenswrapper[4183]: I0813 19:49:14.510249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.208334 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209947 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209964 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.211520 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.270310 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.430490 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.509289 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.152473 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154216 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"}
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154441 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.270299 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868279 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868752 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.874032 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:17 crc kubenswrapper[4183]: E0813 19:49:17.270118 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:17 crc kubenswrapper[4183]: I0813 19:49:17.508401 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.270308 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.509244 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.536924 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.821885 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823855 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823894 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.828073 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.270703 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:19 crc kubenswrapper[4183]: W0813 19:49:19.467073 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.467173 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.507924 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.581401 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.582025 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.209155 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210864 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.270522 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.708455 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:21 crc kubenswrapper[4183]: E0813 19:49:21.269708 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.508993 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558277 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558456 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561906 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.209056 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.211652 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.269829 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.507693 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582060 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582412 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.969453 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.975151 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:23 crc kubenswrapper[4183]: E0813 19:49:23.270008 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:23 crc kubenswrapper[4183]: I0813 19:49:23.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.270019 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.289754 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: I0813 19:49:24.507602 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.270531 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.431533 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.507442 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.540981 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.828733 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830238 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830347 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.834565 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:49:26 crc kubenswrapper[4183]: E0813 19:49:26.270401 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:26 crc kubenswrapper[4183]: I0813 19:49:26.507524 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:27 crc kubenswrapper[4183]: E0813 19:49:27.270871 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:27 crc kubenswrapper[4183]: I0813 19:49:27.508099 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:28 crc kubenswrapper[4183]: E0813 19:49:28.270537 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:28 crc kubenswrapper[4183]: I0813 19:49:28.507909 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:29 crc kubenswrapper[4183]: E0813 19:49:29.270255 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:29 crc kubenswrapper[4183]: I0813 19:49:29.507404 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.270553 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:30 crc kubenswrapper[4183]: I0813 19:49:30.509893 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.715971 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.723222 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.85870034 +0000 UTC m=+1.551365198,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.729334 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.8587333 +0000 UTC m=+1.551398038,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.735411 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.85874733 +0000 UTC m=+1.551411958,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.744178 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.024230731 +0000 UTC m=+1.716895459,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.748454 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.024667024 +0000 UTC m=+1.717331842,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.751936 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.024686724 +0000 UTC m=+1.717351492,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.756567 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b190ee1238d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,LastTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.761713 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.317392991 +0000 UTC m=+2.010058039,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.767268 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.317419641 +0000 UTC m=+2.010084449,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.773494 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.317434591 +0000 UTC m=+2.010099389,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.780170 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.329246191 +0000 UTC m=+2.021910959,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.788362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329270591 +0000 UTC m=+2.021935419,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.794122 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.32928957 +0000 UTC m=+2.021954188,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.799561 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC
m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.32933991 +0000 UTC m=+2.022004657,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.804277 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329369089 +0000 UTC m=+2.022033867,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.809238 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.329383399 +0000 UTC m=+2.022048027,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.814081 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.332498119 +0000 UTC m=+2.025162897,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.819425 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.332519098 +0000 UTC m=+2.025183846,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.824567 4183 
event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.332533998 +0000 UTC m=+2.025198706,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.829662 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.334421288 +0000 UTC m=+2.027086076,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.834495 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.334438458 +0000 UTC m=+2.027103186,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.839365 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.334449487 +0000 UTC m=+2.027114225,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.845902 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1934520c58 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,LastTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.851094 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b193452335e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,LastTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.858497 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b193454f3a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,LastTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.863370 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1934c22012 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC 
m=+2.487096756,LastTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC m=+2.487096756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.868318 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1935677efa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,LastTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.873439 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199886db6b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,LastTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.878613 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1998dd30be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,LastTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.883898 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19999cbe50 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,LastTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.889369 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1999c204e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,LastTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.895540 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b199b54a9df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,LastTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.900880 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199e67d773 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,LastTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.906976 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b199f3a8cc6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,LastTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.915324 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b199fe9c443 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,LastTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.923950 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.929030 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19a2a80e70 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,LastTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.935053 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.940670 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 
19:43:58.035153251 +0000 UTC m=+4.727818009,LastTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC m=+4.727818009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.946372 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba6c9dae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,LastTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.953195 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c16e2579 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,LastTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.958937 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19c770630c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,LastTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.965183 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19c89e5cea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,LastTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.971982 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c998e3fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,LastTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.978918 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b19cb0fb052 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,LastTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.984856 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19e5fef6de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,LastTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.989255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc142bc3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,LastTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.994025 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc3be3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,LastTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.999221 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a20af9846 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,LastTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.005263 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a2538788f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,LastTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.010708 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a33bbabba openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,LastTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.017311 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a352c73be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,LastTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.022588 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36add0c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,LastTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.027421 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36e70dda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,LastTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.032735 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a38f39204 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC 
m=+6.852413400,LastTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC m=+6.852413400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.038190 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a3973e4ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,LastTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.054452 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a39869685 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,LastTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.060585 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a3d2acd3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,LastTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.066507 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a3dbdce11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,LastTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.072140 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a4f719cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,LastTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.078285 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a7478fb6e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,LastTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.089502 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a749b2daa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,LastTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.096173 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a898817aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,LastTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.102249 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a8a37d37f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,LastTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.108244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8bfdc49b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,LastTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.115351 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8c18b55e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,LastTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.121877 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a8c2871a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC m=+8.248387694,LastTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC m=+8.248387694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.129240 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1ae43f56b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,LastTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.135255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1ae71d62bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,LastTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.142020 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aeee82a72 openshift-kube-apiserver 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,LastTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.147335 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aefb94b8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,LastTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.153455 4183 event.go:346] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1af0961313 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,LastTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.159561 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1af3f4aa7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,LastTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 
19:49:31.165738 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b08a0a410 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,LastTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.172647 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1b09844dfa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC m=+10.351559570,LastTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC 
m=+10.351559570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.179930 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4a743788 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,LastTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.181609 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b4a769be8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC m=+11.441181476,LastTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC 
m=+11.441181476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.188466 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f78ce68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,LastTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.193493 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f9e7370 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC 
m=+11.527678176,LastTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC m=+11.527678176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.198940 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b5384199a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,LastTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.205056 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.211243 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b891abecf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,LastTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.216698 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b89221cd6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,LastTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.222906 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b8d621d7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,LastTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.228245 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9004b8dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,LastTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.233893 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9025a162 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,LastTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.239673 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1bdc2e4fe5 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,LastTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.244436 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be6038a15 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,LastTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.250241 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be637912f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,LastTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.256487 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c0fd99e9b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,LastTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.261845 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c1834ac80 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,LastTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.268266 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 
19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.270099 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.273193 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.279406 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d51d0e2 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/healthz": context deadline exceeded Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,LastTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.285865 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d52c4f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,LastTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.291293 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6837ed20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,LastTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.296244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6838c787 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": 
read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,LastTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.300958 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6ea889af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Aug 13 19:49:31 crc kubenswrapper[4183]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 19:49:31 crc kubenswrapper[4183]: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,LastTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.305822 4183 event.go:346] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6eaa6926 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,LastTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.311049 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:22.581770237 +0000 UTC m=+29.274586219,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.315857 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:22.582142917 +0000 UTC m=+29.274807915,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.321366 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.185b6b1b53c35bb7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:22.890986821 +0000 UTC m=+29.583651619,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.328168 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364a25ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC 
m=+36.872527479,LastTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC m=+36.872527479,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.333579 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364b662f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,LastTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.338449 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b2136ee1b84 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,LastTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.343715 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19a0082eef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:44:30.265237637 +0000 UTC m=+36.957902255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.349819 4183 
event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19b35fe1a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:44:30.560420379 +0000 UTC m=+37.253085177,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.354916 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19ba50d163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC 
m=+4.727818009,LastTimestamp:2025-08-13 19:44:30.600329758 +0000 UTC m=+37.292994536,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.361362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:42.58231867 +0000 UTC m=+49.274983458,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.368279 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:42.583111371 +0000 UTC m=+49.275776039,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.377404 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:52.581706322 +0000 UTC m=+59.274371120,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: I0813 19:49:31.512119 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.209040 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.211129 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.270105 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.510493 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.547606 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581299 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.835071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:32 crc kubenswrapper[4183]: 
I0813 19:49:32.836921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836946 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.842913 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.208703 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209999 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.211385 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.213154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.270083 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.513266 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.270501 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.290122 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: I0813 19:49:34.511654 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.269914 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.432366 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:35 crc kubenswrapper[4183]: I0813 19:49:35.509201 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:36 crc kubenswrapper[4183]: E0813 19:49:36.270729 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:36 crc kubenswrapper[4183]: I0813 19:49:36.510235 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.270369 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:37 crc kubenswrapper[4183]: I0813 19:49:37.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: W0813 19:49:37.988112 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.988181 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:38 crc kubenswrapper[4183]: E0813 19:49:38.270227 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:38 crc kubenswrapper[4183]: I0813 19:49:38.516757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.270570 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.509832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.555643 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.587743 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.588049 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589547 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.594720 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.843881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller 
attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845419 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845608 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845971 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.853543 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.219245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220264 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220283 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:40 crc kubenswrapper[4183]: E0813 19:49:40.270720 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.513496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.270528 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:41 crc kubenswrapper[4183]: I0813 19:49:41.511039 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: W0813 19:49:41.624712 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.624885 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:42 crc kubenswrapper[4183]: E0813 19:49:42.270521 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:42 crc kubenswrapper[4183]: I0813 19:49:42.510642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:43 crc kubenswrapper[4183]: E0813 19:49:43.270599 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:43 crc kubenswrapper[4183]: I0813 19:49:43.510273 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.270172 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.291062 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: I0813 19:49:44.510192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.270530 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.432637 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:45 crc kubenswrapper[4183]: I0813 19:49:45.518078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.270379 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.509589 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.562571 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.854766 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856820 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856861 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.862298 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.208883 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210528 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.211829 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.270192 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.509999 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:48 crc kubenswrapper[4183]: E0813 19:49:48.270137 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:48 crc kubenswrapper[4183]: I0813 19:49:48.510012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:49 crc kubenswrapper[4183]: E0813 19:49:49.270426 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:49 crc kubenswrapper[4183]: I0813 19:49:49.515265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:50 crc kubenswrapper[4183]: E0813 19:49:50.271060 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:50 crc kubenswrapper[4183]: I0813 19:49:50.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: W0813 19:49:51.139011 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.139082 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.270920 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:51 crc kubenswrapper[4183]: I0813 19:49:51.512307 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:52 crc kubenswrapper[4183]: E0813 19:49:52.270037 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:52 crc kubenswrapper[4183]: I0813 19:49:52.510453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.269932 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.510636 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.569575 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.862843 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864628 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864681 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.870484 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.269682 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.291339 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.512971 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.663943 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664111 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.969279 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.989173 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.270095 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.433830 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:55 crc kubenswrapper[4183]: I0813 19:49:55.510264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:56 crc kubenswrapper[4183]: E0813 19:49:56.270142 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:56 crc kubenswrapper[4183]: I0813 19:49:56.506012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.269926 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:57 crc kubenswrapper[4183]: I0813 19:49:57.541656 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: W0813 19:49:57.811287 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.811355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 13 19:49:58 crc kubenswrapper[4183]: E0813 19:49:58.271088 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:58 crc kubenswrapper[4183]: I0813 19:49:58.513900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:59 crc kubenswrapper[4183]: E0813 19:49:59.269943 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.519147 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.759430 4183 csr.go:261] certificate signing request csr-lhhqv is approved, waiting to be issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.783983 4183 csr.go:257] certificate signing request csr-lhhqv is issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.877575 4183 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.270621 4183 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.785669 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-25 02:29:24.474296861 +0000 UTC
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.786022 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 5358h39m23.688281563s for next certificate rotation
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.870735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875534 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042192 4183 kubelet_node_status.go:116] "Node was previously registered" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042571 4183 kubelet_node_status.go:80] "Successfully registered node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047664 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.081841 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089845 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089866 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089888 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089919 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.111413 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122042 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122222 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122252 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122285 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.167689 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192458 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192513 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192549 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205447 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205512 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205543 4183 kubelet_node_status.go:512] "Error getting the current node from lister" err="node \"crc\" not found" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.208655 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210144 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.211710 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.212117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.305759 4183 kubelet_node_status.go:506] "Node not becoming ready in time after startup" Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.313867 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.434581 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:10 crc kubenswrapper[4183]: E0813 19:50:10.316000 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:10 crc kubenswrapper[4183]: I0813 19:50:10.885620 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212107 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212160 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.223490 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228348 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228396 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.239346 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.245102 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.257632 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263741 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.275510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294375 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.208952 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210736 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.212190 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.208746 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.333580 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"}
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337372 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.318135 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.435056 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564190 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564518 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566636 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:20 crc kubenswrapper[4183]: E0813 19:50:20.321676 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421367 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.613232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621979 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.635751 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641422 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641904 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.655538 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661905 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681072 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681127 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.695561 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.992377 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.324011 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.436171 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.570672 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.571207 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573151 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573342 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:29 crc kubenswrapper[4183]: I0813 19:50:29.245026 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:30 crc kubenswrapper[4183]: E0813 19:50:30.326466 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814253 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814288 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.829151 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835413 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835434 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877197 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.891400 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896662 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909018 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909106 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.328375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.437419 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.517928 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.756603 4183 apiserver.go:52] "Watching apiserver"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.776022 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778291 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7","openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-marketplace/certified-operators-7287f","openshift-network-node-identity/network-node-identity-7xghp","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-authentication/oauth-openshift-765b47f944-n2lhl","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-apiserver/apiserver-67cbf64bc9-mtx25","openshift-machine-config-operator/machine-config-server-v65wr","openshift-marketplace/redhat-operators-f4jkp","openshift-dns/dns-default-gbw49","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-image-registry/image-registry-585546dd8b-v5m4t","openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-multus/network-metrics-daemon-qdfr4","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","openshift-ovn-kubernetes/ovnkube-node-44qcg","openshift-kube-controller-manager/revision-pruner-8-crc","openshift-image-registry/node-ca-l92hr","openshift-network-operator/iptables-alerter-wwpnd","openshift-service-ca/service-ca-666f99b6f-vlbxv","openshift-console/console-84fccc7b6-mkncc","openshift-controller-manager/controller-manager-6ff78978b4-q4vv8","openshift-marketplace/community-operators-8jhz6","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-console/downloads-65476884b9-9wcvx","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-dns/node-resolver-dn27q","openshift-ingress-canary/ingress-canary-2vhcn","openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-multus/multus-q88th","openshift-network-diagnostics/network-check-target-v54bt","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-etcd/etcd-crc","openshift-marketplace/redhat-marketplace-8s8pc","openshift-marketplace/redhat-marketplace-rmwfn","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778476 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778954 4183 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779016 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779108 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779177 4183 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779239 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779463 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780375 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780584 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781191 4183 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782210 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782757 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.783203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783420 4183 topology_manager.go:215] "Topology Admit Handler" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783663 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784737 4183 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785160 4183 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786269 4183 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787193 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787367 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787875 4183 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788101 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788490 4183 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788736 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.790616 4183 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.791383 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792215 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.798866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787459 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794364 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.800281 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.795116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.811906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813056 4183 topology_manager.go:215] "Topology Admit Handler" podUID="378552fd-5e53-4882-87ff-95f3d9198861" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813646 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.814482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813932 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816490 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816766 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820457 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821437 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821974 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822350 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823996 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824236 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824337 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824564 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824876 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824900 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825235 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.825452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825508 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825892 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.826588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826923 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827276 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827581 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.827954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828349 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828484 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829150 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829931 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830220 4183 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829370 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830751 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831073 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831430 4183 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832299 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832628 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.832975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.833167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.833551 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834086 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835213 4183 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.835477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835878 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" podNamespace="openshift-image-registry" podName="image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836253 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" podNamespace="openshift-console" podName="console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836600 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837483 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13ad7555-5f28-4555-a563-892713a8433a" podNamespace="openshift-authentication" podName="oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837759 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838100 4183 topology_manager.go:215] "Topology Admit Handler" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" podNamespace="openshift-controller-manager" podName="controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838255 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.838630 4183 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839427 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839941 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840167 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838200 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839898 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840698 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841006 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841374 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.841929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842007 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.843357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.840689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844632 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845436 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" podNamespace="openshift-marketplace" podName="redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845355 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846052 4183 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.863009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.880047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.897584 4183 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898908 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898972 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899018 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899090 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899248 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899303 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899509 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899572 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899682 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899897 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899924 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900032 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900518 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900596 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900747 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901044 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901077 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901399 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.901756 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902158 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902212 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902295 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402238295 +0000 UTC m=+407.094902923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902508 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402486162 +0000 UTC m=+407.095150820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903086 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903454 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903594 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903871 4183 secret.go:194] Couldn't get secret 
openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903991 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904087 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904148 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904321 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904399 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.905258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.905873 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906263 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906408 4183 configmap.go:199] Couldn't get configMap 
openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906756 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910190 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910486 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.910537 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910648 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910765 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910854 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: 
\"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911316 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911337 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" 
(UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914158 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: 
I0813 19:50:39.914233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914331 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod 
\"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914509 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914756 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914902 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914948 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915114 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915162 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915186 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915235 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915261 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.921592 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902543 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.922372 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906261 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927437 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42762797 +0000 UTC m=+407.120292559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427661471 +0000 UTC m=+407.120326149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927698 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427690392 +0000 UTC m=+407.120354980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427709523 +0000 UTC m=+407.120374121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428093264 +0000 UTC m=+407.120757962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428211547 +0000 UTC m=+407.120876145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42830978 +0000 UTC m=+407.120974448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428400983 +0000 UTC m=+407.121065581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428493755 +0000 UTC m=+407.121158343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928585 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428575108 +0000 UTC m=+407.121239696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428699471 +0000 UTC m=+407.121364059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428905857 +0000 UTC m=+407.121570575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429137544 +0000 UTC m=+407.121802252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429249537 +0000 UTC m=+407.121914125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42935045 +0000 UTC m=+407.122015058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429448262 +0000 UTC m=+407.122112851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.930440 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930734 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.933915 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.934582 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935125 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.935163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435201617 +0000 UTC m=+407.127866355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935281 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435329601 +0000 UTC m=+407.127994339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935391 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435432574 +0000 UTC m=+407.128097302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935723 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435926588 +0000 UTC m=+407.128591236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936011 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436057161 +0000 UTC m=+407.128721889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936130 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936164 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436154044 +0000 UTC m=+407.128818772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936341 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436395861 +0000 UTC m=+407.129060499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935288 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436583686 +0000 UTC m=+407.129248304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936642 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436673389 +0000 UTC m=+407.129338017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.939937 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440010504 +0000 UTC m=+407.132675122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440068636 +0000 UTC m=+407.132733254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941769 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942135 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942424 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813
19:50:39.942515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942693 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945063 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: 
\"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945152 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945403 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945534 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945566 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945592 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945650 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946078 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.446204751 +0000 UTC m=+407.138869529 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946387 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.44649172 +0000 UTC m=+407.139156338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946719 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946963 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947176 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947282 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947340 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh 
podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.447387085 +0000 UTC m=+407.140051813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947411 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948181 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948506 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.948996 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949269 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949314 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949377 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not 
registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949387 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949400 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.949887 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949954 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950021 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950070 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950128 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object 
"openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.950326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950369 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950902 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950965 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951022 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951026 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.951061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951346 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951364 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951577 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952003 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952069 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954085 
4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954197 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954239 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954630 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.955715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956034 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956378 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.456561177 +0000 UTC m=+407.149225965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956740 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956761 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.457077892 +0000 UTC m=+407.149742580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.957331 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.945240 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957879 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.958726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.958958 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: 
\"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.480932494 +0000 UTC m=+407.173597112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481054117 +0000 UTC m=+407.173718835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481090728 +0000 UTC m=+407.173755326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481119049 +0000 UTC m=+407.173783647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48114988 +0000 UTC m=+407.173814468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481170981 +0000 UTC m=+407.173835579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481193371 +0000 UTC m=+407.173858079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481211322 +0000 UTC m=+407.173876030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981240 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481230432 +0000 UTC m=+407.173895030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481256113 +0000 UTC m=+407.173920821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481274004 +0000 UTC m=+407.173938772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481293824 +0000 UTC m=+407.173958422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481314785 +0000 UTC m=+407.173979383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981342 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481333585 +0000 UTC m=+407.173998283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481353386 +0000 UTC m=+407.174018164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481387827 +0000 UTC m=+407.174052615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48148407 +0000 UTC m=+407.174148758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481523171 +0000 UTC m=+407.174187759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981540 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481543491 +0000 UTC m=+407.174208179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481574512 +0000 UTC m=+407.174239400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481596023 +0000 UTC m=+407.174260751 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981659 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981881 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 
nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481912432 +0000 UTC m=+407.174577170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981932 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947533 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981981 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982001 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482009175 +0000 UTC m=+407.174673943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982062 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482044396 +0000 UTC m=+407.174709194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982174 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.982225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982375 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982443 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: 
\"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982549 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982942 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.983093 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983195 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983370 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983464 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947642 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986704 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986915 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944961 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.988040 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988586 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988727 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988902 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989071 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.989388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.995425 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995917 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995939 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995958 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946154 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997174 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997195 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997217 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997338 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997418 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997432 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997579 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997622 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997923 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998053 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998155 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003069 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003535 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003114 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004323 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.007460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.507495343 +0000 UTC m=+407.200160031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512153866 +0000 UTC m=+407.204818564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512192467 +0000 UTC m=+407.204857165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512211868 +0000 UTC m=+407.204876586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512235249 +0000 UTC m=+407.204900067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512261349 +0000 UTC m=+407.204926047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5122873 +0000 UTC m=+407.204951998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012985 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015081 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016143 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.016602 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016871 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") "
pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.018521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.019119 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.020741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.021678 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.021971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.022410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023291 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.516762998 +0000 UTC m=+407.209427816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523662685 +0000 UTC m=+407.216327283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.026600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.027004 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029345 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029578 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.030344 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.031228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.031653 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007909 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.038765 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007958 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007988 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008075 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008656 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008885 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009447 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009483 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007588 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032241 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: 
\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032367 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032386 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.026884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523689636 +0000 UTC m=+407.216354334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539465937 +0000 UTC m=+407.232130645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539502438 +0000 UTC m=+407.232167036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039528 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539521218 +0000 UTC m=+407.232185816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539537429 +0000 UTC m=+407.232202017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.53955768 +0000 UTC m=+407.232222278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5395749 +0000 UTC m=+407.232239598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539599681 +0000 UTC m=+407.232264279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539619691 +0000 UTC m=+407.232284279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039646 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539638172 +0000 UTC m=+407.232302830 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539657612 +0000 UTC m=+407.232322200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539671033 +0000 UTC m=+407.232335631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.539686603 +0000 UTC m=+407.232351191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539703274 +0000 UTC m=+407.232367872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539721844 +0000 UTC m=+407.232386432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039749 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539743465 +0000 UTC m=+407.232408073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539761385 +0000 UTC m=+407.232426093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039995 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539974411 +0000 UTC m=+407.232639019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540013773 +0000 UTC m=+407.232678481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.540036533 +0000 UTC m=+407.232701121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540050534 +0000 UTC m=+407.232715122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.033427 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040101 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540094455 +0000 UTC m=+407.232759073 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033626 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033959 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034014 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040177 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540170457 +0000 UTC m=+407.232835175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034115 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540210798 +0000 UTC m=+407.232875486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.035141 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035194 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54027947 +0000 UTC m=+407.232944088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035334 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540328002 +0000 UTC m=+407.232992620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040473 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540499606 +0000 UTC m=+407.233164214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041073 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041122 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041137 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041219 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541186746 +0000 UTC m=+407.233851444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041359 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041440 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541499335 +0000 UTC m=+407.234164093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041594 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041611 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54168032 +0000 UTC m=+407.234345138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.041931 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042124 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.542155684 +0000 UTC m=+407.234820392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007619 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.559359415 +0000 UTC m=+407.252024053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013478 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.060348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.560333093 +0000 UTC m=+407.252997891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013312 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058596 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561312631 +0000 UTC m=+407.253977259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061504 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061342 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.059226 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059265 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059304 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061420 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061715 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061727 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058708 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.062594 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058765 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.064511 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066485 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066499 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065179 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065330 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065490 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065585 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561543078 +0000 UTC m=+407.254207706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.56653355 +0000 UTC m=+407.259198258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566558681 +0000 UTC m=+407.259223269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066583 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566576462 +0000 UTC m=+407.259241060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566593442 +0000 UTC m=+407.259258180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566609253 +0000 UTC m=+407.259273951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566623623 +0000 UTC m=+407.259288241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566642104 +0000 UTC m=+407.259306722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566658264 +0000 UTC m=+407.259322862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566674144 +0000 UTC m=+407.259338743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566687595 +0000 UTC m=+407.259352193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566750137 +0000 UTC m=+407.259414725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569214847 +0000 UTC m=+407.261879465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569295549 +0000 UTC m=+407.261960157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569342581 +0000 UTC m=+407.262007189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569385832 +0000 UTC m=+407.262050440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569421753 +0000 UTC m=+407.262086371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070518 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070560 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570616207 +0000 UTC m=+407.263280825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070711 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070725 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070733 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570751461 +0000 UTC m=+407.263416079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.070879 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.071762 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.072578 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.073055 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.082900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.583389592 +0000 UTC m=+407.276054330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073889 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.074001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.075109 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086579 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083145 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083009 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085385 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085453 4183 
projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085571 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085625 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085662 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085874 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086334 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086382 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089156 4183 projected.go:294] 
Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089723 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.589694212 +0000 UTC m=+407.282359040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089740 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090358 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089758 4183 projected.go:200] Error preparing 
data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089771 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089855 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089923 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090975 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object 
"openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089957 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091011 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089966 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091109 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089981 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091145 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089996 4183 projected.go:294] Couldn't get 
configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091210 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086652 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091255 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59065947 +0000 UTC m=+407.283324208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591298398 +0000 UTC m=+407.283962996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591327129 +0000 UTC m=+407.283991717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091352 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59134372 +0000 UTC m=+407.284008428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59135985 +0000 UTC m=+407.284024438 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59137304 +0000 UTC m=+407.284037638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591390511 +0000 UTC m=+407.284055219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591405241 +0000 UTC m=+407.284069839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091426 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591419782 +0000 UTC m=+407.284084370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591433582 +0000 UTC m=+407.284098170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093285 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093382 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 
19:50:40.093667 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094314 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094496 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.095386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.096028 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " 
pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.097052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097620 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097913 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.598416612 +0000 UTC m=+407.291081300 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100236 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100358 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100446 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.600549023 +0000 UTC m=+407.293213641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.104282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105922 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.106114 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109218 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109303 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109319 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109399 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.609377205 +0000 UTC m=+407.302041823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117310 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117394 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117585 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119907 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119965 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120117 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120374 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120395 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120417 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121024 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121115 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121328 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121373 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122055 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120701 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120932 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120956 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122565 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122551 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.125983 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130843 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130563 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127698 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.128566 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130422 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131232 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131442 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131675 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132135 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132170 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132204 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.135971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136029 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136102 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136117 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136618 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136632 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.636685536 +0000 UTC m=+407.329350374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139129 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140384 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142351 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143590 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143754 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.144419 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144629 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.160910 4183 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.170534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.175171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.182246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.187836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.196391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197400 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197445 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197465 4183 
projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197537 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.697515744 +0000 UTC m=+407.390180472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.203657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.221286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.237889 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.253051 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.254531 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.268483 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410cf605_1970_4691_9c95_53fdc123b1f3.slice/crio-5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e WatchSource:0}: Error finding container 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e: Status 404 returned error can't find the container with id 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.279875 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.780315891 +0000 UTC m=+407.472980619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296267 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296334 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296430 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.79640468 +0000 UTC m=+407.489069408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.298240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.339918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.341980 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.367083 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.375130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.396203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.419432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454329 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod 
\"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455563 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455613 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.455638061 +0000 UTC m=+408.148302839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455645 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455697 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455706 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.455685313 +0000 UTC m=+408.148350061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455763 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.458383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.458365089 +0000 UTC m=+408.151029697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: 
\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459578 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459648 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459961 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460107 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460171 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.460282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460420198 +0000 UTC m=+408.153084796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460456719 +0000 UTC m=+408.153121307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460522 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460553902 +0000 UTC m=+408.153218520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460611 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460632514 +0000 UTC m=+408.153297132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460692 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460696 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461516 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461749 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461956 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462026 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462389 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462410 4183 projected.go:294] Couldn't get configMap 
openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462710 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463178 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463496 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463562 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463636 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 
19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463727 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463927 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463948 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464090 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464170 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460764 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460727 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.460713906 +0000 UTC m=+408.153378524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.467622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.467600923 +0000 UTC m=+408.160265521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.467661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 
19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469257 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469291952 +0000 UTC m=+408.161956570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469323893 +0000 UTC m=+408.161988491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469516 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469504128 +0000 UTC m=+408.162168726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.469528268 +0000 UTC m=+408.162193716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469557289 +0000 UTC m=+408.162221887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469586 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.46957859 +0000 UTC m=+408.162243188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469738414 +0000 UTC m=+408.162403012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469768885 +0000 UTC m=+408.162433483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469890 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469852168 +0000 UTC m=+408.162516846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.470911 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47100016 +0000 UTC m=+408.163664788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471278 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471356171 +0000 UTC m=+408.164020769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471769 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471755912 +0000 UTC m=+408.164420610 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471530 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471565 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.473353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.473339137 +0000 UTC m=+408.166003755 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476701 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476679103 +0000 UTC m=+408.169343841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476714784 +0000 UTC m=+408.169379372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476733884 +0000 UTC m=+408.169398482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.476750155 +0000 UTC m=+408.169414753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476764075 +0000 UTC m=+408.169428663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476920 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476908999 +0000 UTC m=+408.169573587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476936 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47692974 +0000 UTC m=+408.169594448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476952801 +0000 UTC m=+408.169617389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476969741 +0000 UTC m=+408.169634329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477047 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477080434 +0000 UTC m=+408.169745292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477220 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477358 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477398 4183 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477382853 +0000 UTC m=+408.170047471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477447 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477465965 +0000 UTC m=+408.170130573 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.489692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"13eba7880abbfbef1344a579dab2a0b19cce315561153e251e3263ed0687b3e7"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.548115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.593376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.597211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"221a24b0d917be98aa8fdfcfe9dbbefc5cd678c5dd905ae1ce5de6a160842882"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611178 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611237 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611320121 +0000 UTC m=+408.303984859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611377 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611428 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611451 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611465 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.611482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611463675 +0000 UTC m=+408.304128403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611501356 +0000 UTC m=+408.304165954 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611560 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611592309 +0000 UTC m=+408.304257037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611646 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611668531 +0000 UTC m=+408.304333209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612287 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612310 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612332 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61232174 +0000 UTC m=+408.304986358 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61233993 +0000 UTC m=+408.305004558 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611862 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612369 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612383 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612414 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612406382 +0000 UTC m=+408.305070990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612386 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612442863 +0000 UTC m=+408.305107581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611906 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612473 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612515185 +0000 UTC m=+408.305179913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611929 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612555 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612547486 +0000 UTC m=+408.305212094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611939 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612577 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611968 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611977 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612627 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612635 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612576787 +0000 UTC m=+408.305241395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612661629 +0000 UTC m=+408.305326227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61267664 +0000 UTC m=+408.305341228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61269076 +0000 UTC m=+408.305355348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611995 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612721631 +0000 UTC m=+408.305386489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611895 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612763 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612757312 +0000 UTC m=+408.305421930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624197 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624285 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624301 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624598 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.6245759 +0000 UTC m=+408.317240508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624873 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624916 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.624902479 +0000 UTC m=+408.317567087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625186 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625201 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625461 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625442705 +0000 UTC m=+408.318107313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625686 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625771574 +0000 UTC m=+408.318492644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625870 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626040 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626071212 +0000 UTC m=+408.318735830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626327 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626382 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626369941 +0000 UTC m=+408.319034559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.626550 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626884 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627156 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627138403 +0000 UTC m=+408.319803111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627418 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627715 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627700459 +0000 UTC m=+408.320365287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.634892 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.635391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.635379579 +0000 UTC m=+408.328044317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636477 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637440 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637549 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637587 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637656 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.637626603 +0000 UTC m=+408.330291271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637762 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638749 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638767 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638871 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.638898609 +0000 UTC m=+408.331563447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642307307 +0000 UTC m=+408.334972075 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638000 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642385259 +0000 UTC m=+408.335049937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638343 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642446 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642467 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, 
object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642518783 +0000 UTC m=+408.335183461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638426 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642579444 +0000 UTC m=+408.335244132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642726 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.642944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: 
\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" 
(UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644115 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod 
\"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644465 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " 
pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: 
\"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: 
\"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653506 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653545 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:40 crc kubenswrapper[4183]: 
I0813 19:50:40.653740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.664754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667430 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667539 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667560 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.66762997 +0000 UTC m=+408.360294598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638983 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.667720143 +0000 UTC m=+408.360384861 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669223 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669389 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669424 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669483293 +0000 UTC m=+408.362147991 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669624 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669657688 +0000 UTC m=+408.362322306 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669957 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669979 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670259045 +0000 UTC m=+408.362923784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670360 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670591155 +0000 UTC m=+408.363256073 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671119 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671139 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671306 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671187692 +0000 UTC m=+408.363852400 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671532 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671576703 +0000 UTC m=+408.364241381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671604 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671651935 +0000 UTC m=+408.364316653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671692 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671736518 +0000 UTC m=+408.364401816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671754 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671988625 +0000 UTC m=+408.364653243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672531 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672704 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672687545 +0000 UTC m=+408.365352173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672716 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672756287 +0000 UTC m=+408.365420895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639242 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672971 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673014 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673081226 +0000 UTC m=+408.365745884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67321258 +0000 UTC m=+408.365877198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673248731 +0000 UTC m=+408.365913349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639324 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673286 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673316633 +0000 UTC m=+408.365981241 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673408 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673426 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673478917 +0000 UTC m=+408.366143535 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673539 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67356302 +0000 UTC m=+408.366227628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673643 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673683063 +0000 UTC m=+408.366347691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673757 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674094425 +0000 UTC m=+408.366759153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674205408 +0000 UTC m=+408.366870036 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674282 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674309221 +0000 UTC m=+408.366973829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674421 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674465796 +0000 UTC m=+408.367130424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674534 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674551678 +0000 UTC m=+408.367216396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674648 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674664 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674677 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674704263 +0000 UTC m=+408.367368891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675003 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675019 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675066653 +0000 UTC m=+408.367731371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675151 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675165 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675173 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675210 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675198537 +0000 UTC m=+408.367863255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675272 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675294489 +0000 UTC m=+408.367959107 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675404 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675448834 +0000 UTC m=+408.368113632 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675554 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675569 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675600998 +0000 UTC m=+408.368265726 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676118663 +0000 UTC m=+408.368783501 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676208 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676222 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not 
registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676282 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676273397 +0000 UTC m=+408.368938015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639039 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676322999 +0000 UTC m=+408.368987687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639667 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67636126 +0000 UTC m=+408.369025868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639693 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676411631 +0000 UTC m=+408.369076239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639744 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676454 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676497444 +0000 UTC m=+408.369162182 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640267 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676553685 +0000 UTC m=+408.369218303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640674 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676595417 +0000 UTC m=+408.369260025 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640955 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676637398 +0000 UTC m=+408.369302016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641007 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676688709 +0000 UTC m=+408.369353317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641252 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677093321 +0000 UTC m=+408.369757929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641305 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677144412 +0000 UTC m=+408.369809130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641344 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677342 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677364 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677498012 +0000 UTC m=+408.370162740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677625 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677648977 +0000 UTC m=+408.370313595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677624 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677706218 +0000 UTC m=+408.370370906 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678032 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678071 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678225 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678378 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678399 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678414 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.680703 4183 configmap.go:199] Couldn't get configMap 
openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682423 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.687249 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.688482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"41d80ed1b6b3289201cf615c5e532a96845a5c98c79088b67161733f63882539"} Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.688504 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.688567 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod 
openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689052 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689177 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689186 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689288 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689303 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689312 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object 
"openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689418 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689432 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689530 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689543 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689552 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689626 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:40 crc 
kubenswrapper[4183]: E0813 19:50:40.690089 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.678068569 +0000 UTC m=+408.370733357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694516299 +0000 UTC m=+408.387180897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694542109 +0000 UTC m=+408.387206697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.69456639 +0000 UTC m=+408.387230978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694730915 +0000 UTC m=+408.387395613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694747315 +0000 UTC m=+408.387412013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639365 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.699059 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.702968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.699630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694766776 +0000 UTC m=+408.387431374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.70926622 +0000 UTC m=+408.401930828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709495977 +0000 UTC m=+408.402160775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709529308 +0000 UTC m=+408.402193966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709577 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709565979 +0000 UTC m=+408.402230657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714227782 +0000 UTC m=+408.406892390 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71450534 +0000 UTC m=+408.407169938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714548 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714531291 +0000 UTC m=+408.407196129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714580 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714570242 +0000 UTC m=+408.407234840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714958 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715737 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.715874899 +0000 UTC m=+408.408539737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715975 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715989 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716030 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716022723 +0000 UTC m=+408.408687341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716072 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716051654 +0000 UTC m=+408.408716242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716131 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716190428 +0000 UTC m=+408.408855046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716287 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716313 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716336 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716376064 +0000 UTC m=+408.409040762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716477 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719348 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719369 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71940179 +0000 UTC m=+408.412066418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719496 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719514 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719522 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719549504 +0000 UTC m=+408.412214122 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719639 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719663 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719672 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719718 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719706979 +0000 UTC m=+408.412371597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719877 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719924 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719909655 +0000 UTC m=+408.412574383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.757513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.764569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"9bb711518b1fc4ac72f4ad05c59c2bd3bc932c94879c31183df088652e4ed2c3"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.790268 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"815c16566f290b783ea9eced9544573db3088d99a58cb4d87a1fd8ab2b69614e"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.797291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.810977 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb762d1_812f_43f1_9eac_68034c1ecec7.slice/crio-44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c WatchSource:0}: Error finding container 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c: Status 404 returned error can't find the container with id 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.822586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833599 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.834230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834607 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834649 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834664 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834708246 +0000 UTC m=+408.527372974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834866 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834883 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834913601 +0000 UTC m=+408.527578409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834988 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.835013 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.835004674 +0000 UTC m=+408.527669292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.837241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.849250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"e76d945a8cb210681a40e3f9356115ebf38b8c8873e7d7a82afbf363f496a845"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.873331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.888954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"807117e45707932fb04c35eb8f8cd7233e9fecc547b5e6d3e81e84b6f57d09af"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.900523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.927267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e4abca68aabfc809ca21711270325e201599e8b85acaf41371638a0414333adf"} Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.932948 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f WatchSource:0}: Error finding container 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f: Status 404 returned error can't find the container with id 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f Aug 13 19:50:40 
crc kubenswrapper[4183]: I0813 19:50:40.933327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.960512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.978547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.029130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.057008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.105141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.140646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.198939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.215139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.216639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219327 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220003 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.251216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.256207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.300629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.315526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.334712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.378702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: W0813 19:50:41.431345 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc291782_27d2_4a74_af79_c7dcb31535d2.slice/crio-8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 WatchSource:0}: Error finding container 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4: Status 404 returned error can't find the container with id 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.464467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f4
9cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d896
86e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473207 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473293997 +0000 UTC m=+410.165958705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473627 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473660538 +0000 UTC m=+410.166325246 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473663 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473715 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.47373438 +0000 UTC m=+410.166398998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473765421 +0000 UTC m=+410.166430119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474885 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474937 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.474923424 +0000 UTC m=+410.167588172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474988 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475018606 +0000 UTC m=+410.167683294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475055 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475090 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475080758 +0000 UTC m=+410.167745376 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.47512656 +0000 UTC m=+410.167791298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 
19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475389 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475503 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475540571 +0000 UTC m=+410.168205349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475587 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475592 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475614 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475614703 +0000 UTC m=+410.168279391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475632 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475656 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475669 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475662325 +0000 UTC m=+410.168327093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475687716 +0000 UTC m=+410.168352494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475703 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475706626 +0000 UTC m=+410.168371284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475720 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475723857 +0000 UTC m=+410.168388455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475737937 +0000 UTC m=+410.168402635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475509 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475879 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475766 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482521 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482725 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482770288 +0000 UTC m=+410.175434906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482930 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482951963 +0000 UTC m=+410.175616581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482992 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483000085 +0000 UTC m=+410.175664723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483029485 +0000 UTC m=+410.175694083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483043 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483054 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483046476 +0000 UTC m=+410.175711064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482991 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483074 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483062736 +0000 UTC m=+410.175727414 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483092237 +0000 UTC m=+410.175756865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483113728 +0000 UTC m=+410.175778356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483145469 +0000 UTC m=+410.175810157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483313 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483346034 +0000 UTC m=+410.176010642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.532082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.564233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584270 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.584368602 +0000 UTC m=+410.277033220 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584529 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585072 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585561 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585643078 +0000 UTC m=+410.278307816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585879 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585899 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585890855 +0000 UTC m=+410.278555453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586173 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.586644 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588029 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588496 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588555 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588628 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.597944 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template
podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.58676001 +0000 UTC m=+410.279427598 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598293 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598272999 +0000 UTC m=+410.290937717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598391513 +0000 UTC m=+410.291056211 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598431534 +0000 UTC m=+410.291096192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598450934 +0000 UTC m=+410.291115522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.598465835 +0000 UTC m=+410.291130423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.610340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.656593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.687893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " 
pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688194 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688280 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688314 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 
13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688664 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689104 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689203 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" 
(UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689374 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689713 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689937 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690128 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690184 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690170346 +0000 UTC m=+410.382834964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690323 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690378 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690402152 +0000 UTC m=+410.383066880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690450 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690509 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690522 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690552977 +0000 UTC m=+410.383217655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690593 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690605 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690629299 +0000 UTC m=+410.383294037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690667 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690680 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690688 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690704621 +0000 UTC m=+410.383369349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712754 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713166 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714150 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71447942 +0000 UTC m=+410.407144068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.703566 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714544292 +0000 UTC m=+410.407208900 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705091 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714593884 +0000 UTC m=+410.407258612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705144 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714632695 +0000 UTC m=+410.407297303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705183 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714674726 +0000 UTC m=+410.407339334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705281 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714719 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714739 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714769609 +0000 UTC m=+410.407434287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705329 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714971 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714957694 +0000 UTC m=+410.407622322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705387 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715010 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715002725 +0000 UTC m=+410.407667343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705450 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715033 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715071 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715065347 +0000 UTC m=+410.407729965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705499 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715089 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715103 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715136119 +0000 UTC m=+410.407800947 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705898 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715168 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715180 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715205 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715197771 +0000 UTC m=+410.407862389 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705954 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715244502 +0000 UTC m=+410.407909310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705993 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715284993 +0000 UTC m=+410.407949611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706313 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715323 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715351785 +0000 UTC m=+410.408016403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706371 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715377 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715393497 +0000 UTC m=+410.408058115 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706420 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715439978 +0000 UTC m=+410.408104646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706515 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715485659 +0000 UTC m=+410.408150317 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706554 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715539221 +0000 UTC m=+410.408203899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706588 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715595002 +0000 UTC m=+410.408259680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706679 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715642164 +0000 UTC m=+410.408306782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706725 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715687055 +0000 UTC m=+410.408351723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706761 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715731526 +0000 UTC m=+410.408396204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715931 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715974 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715988 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716016 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716027 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716002084 +0000 UTC m=+410.408666832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716078396 +0000 UTC m=+410.408743094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716174 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.71620303 +0000 UTC m=+410.408867638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716266 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716296222 +0000 UTC m=+410.408960840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716360 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716399095 +0000 UTC m=+410.409063783 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716502 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716517 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716536 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71656071 +0000 UTC m=+410.409225338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717416 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717457456 +0000 UTC m=+410.410122084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717527 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.717561238 +0000 UTC m=+410.410225856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717625 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717649561 +0000 UTC m=+410.410314179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717711 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.717744934 +0000 UTC m=+410.410409562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717921 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717946559 +0000 UTC m=+410.410611188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717999 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.718026932 +0000 UTC m=+410.410691550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718082 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718105784 +0000 UTC m=+410.410770392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718167 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718188206 +0000 UTC m=+410.410852814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718250 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718289 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718280859 +0000 UTC m=+410.410945487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718331 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718368 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718355621 +0000 UTC m=+410.411020239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718428 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718446 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718460 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718480835 +0000 UTC m=+410.411145453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718555 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718582918 +0000 UTC m=+410.411247526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718642 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71866425 +0000 UTC m=+410.411328868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718731 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718747 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718759 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716666 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.735182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.735145851 +0000 UTC m=+410.427810479 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737022 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737087597 +0000 UTC m=+410.429752215 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737248 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737283132 +0000 UTC m=+410.429947750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737477488 +0000 UTC m=+410.430142156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.738586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739061 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739120 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739101254 +0000 UTC m=+410.431765942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739199 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739231228 +0000 UTC m=+410.431895916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739307 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739338121 +0000 UTC m=+410.432002739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739402 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739452 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739437684 +0000 UTC m=+410.432102632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739549 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739569 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739581 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739626979 +0000 UTC m=+410.432291877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748345 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748656 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748993 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.749963 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750098 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750337 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750452 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750547 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751004 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751296 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751441 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751891 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690463 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752368 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752335 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752417 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752510 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752532 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752615 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752689 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753277 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753568 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753876 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754000 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754208 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754650 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755005 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755084 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755102 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755199 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755282 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755356 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755438 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755524 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755641 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755658 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755670 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755847 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755868 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755878 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755957 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755973 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755992 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756083 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756096 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756104 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756517 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756591 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756608 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.757937 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756534 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.762895 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739765183 +0000 UTC m=+410.432429871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772576971 +0000 UTC m=+410.465241569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772623 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772615092 +0000 UTC m=+410.465279680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772634573 +0000 UTC m=+410.465299171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772665973 +0000 UTC m=+410.465330571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772686474 +0000 UTC m=+410.465351072 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772706715 +0000 UTC m=+410.465371313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773183 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773166568 +0000 UTC m=+410.465831176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773211 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773197619 +0000 UTC m=+410.465862767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773244 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773227879 +0000 UTC m=+410.465892537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77325492 +0000 UTC m=+410.465919588 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773275991 +0000 UTC m=+410.465940579 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773291351 +0000 UTC m=+410.465955939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773310462 +0000 UTC m=+410.465975170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773325992 +0000 UTC m=+410.465990590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773351593 +0000 UTC m=+410.466016191 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773381 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773373204 +0000 UTC m=+410.466037792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773391064 +0000 UTC m=+410.466055662 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773415815 +0000 UTC m=+410.466080413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773434505 +0000 UTC m=+410.466099093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773450346 +0000 UTC m=+410.466114944 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773477737 +0000 UTC m=+410.466142335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773501 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773493747 +0000 UTC m=+410.466158585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773524 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773509958 +0000 UTC m=+410.466174656 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773538918 +0000 UTC m=+410.466203506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773554819 +0000 UTC m=+410.466219527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773570469 +0000 UTC m=+410.466235057 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77360082 +0000 UTC m=+410.466265408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773620201 +0000 UTC m=+410.466284799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773637351 +0000 UTC m=+410.466301939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773655842 +0000 UTC m=+410.466320440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773671472 +0000 UTC m=+410.466336070 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773700 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773686623 +0000 UTC m=+410.466351391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773715643 +0000 UTC m=+410.466380241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.775512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.816971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817074 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817286 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817340 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.822371 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: 
E0813 19:50:41.822540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.822520858 +0000 UTC m=+410.515185666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823111 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827017917 +0000 UTC m=+410.519682545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823314 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827060 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827080 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827104519 +0000 UTC m=+410.519769127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823368 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827146 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82713893 +0000 UTC m=+410.519803548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827164 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827186582 +0000 UTC m=+410.519851190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823486 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827211 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827254 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827246873 +0000 UTC m=+410.519911481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823527 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827274 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827298 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827328276 +0000 UTC m=+410.519992884 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823577 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827361 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827392608 +0000 UTC m=+410.520057226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823611 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827475 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82746769 +0000 UTC m=+410.520132368 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.823960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824019 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827643 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827664485 +0000 UTC m=+410.520329103 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824606 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827705556 +0000 UTC m=+410.520370174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828078 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod 
\"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.834861 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836070246 +0000 UTC m=+410.528734984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.835018 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836549 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836639 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 
19:50:41.836863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836757605 +0000 UTC m=+410.529422303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.900416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929755 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.930475 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932632 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object 
"openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932688 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932873 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932894 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932930 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.932914523 +0000 UTC m=+410.625579141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932993 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933008 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933016 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933033907 +0000 UTC m=+410.625698525 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933284 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933273204 +0000 UTC m=+410.625937902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.980329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:41.999954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.001483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.013623 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.022652 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"7f52ab4d1ec6be2d7d4c2b684f75557c65a5b3424d556a21053e8abd54d2afd9"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.037563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"55fa820b6afd0d7cad1d37a4f84deed3f0ce4495af292cdacc5f97f75e79113b"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.044591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.038442 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045395 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.056295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.091309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125228 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125259 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.149043 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.149323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.164679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165056 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165121 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.203542 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.208726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209394 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.236393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.305114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.411594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417054 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417101 4183 kubelet_node_status.go:729] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417123 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417211 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.484289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.485059 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511714 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511766 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.548476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567581 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567636 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.604743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.632592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.674444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.762492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.812684 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.840428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.906099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.004691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.041613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.062631 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210731 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213964 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.343986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.422944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514897 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515258 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515471 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515488903 +0000 UTC m=+414.208153801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515573466 +0000 UTC m=+414.208238094 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515625 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515645408 +0000 UTC m=+414.208313616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515308 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515699 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515731 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51573433 +0000 UTC m=+414.208399048 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515751901 +0000 UTC m=+414.208416629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515928 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515979677 +0000 UTC m=+414.208644585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.516514443 +0000 UTC m=+414.209179041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516683 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.517085 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.517051798 +0000 UTC m=+414.209716786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518245 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518444 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.518483599 +0000 UTC m=+414.211148187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518553 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518563 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.518591882 +0000 UTC m=+414.211256610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518644 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51923774 +0000 UTC m=+414.211902459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518648 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519309783 +0000 UTC m=+414.211974501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518707 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519361484 +0000 UTC m=+414.212026322 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518723 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519424 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519500 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519488108 +0000 UTC m=+414.212152826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518725 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519560 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519543879 +0000 UTC m=+414.212208827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518744 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519597711 +0000 UTC m=+414.212262419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520134 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520174697 +0000 UTC m=+414.212839425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520625 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813
19:50:43.520867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520846137 +0000 UTC m=+414.213510855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520928 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.521262 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.524465 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.52444889 +0000 UTC m=+414.217113698 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526585 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526647 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.526632962 +0000 UTC m=+414.219297700 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527516 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.527557568 +0000 UTC m=+414.220222276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.527322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528049 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.528090394 +0000 UTC m=+414.220755172 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.528138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529140 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.529621587 +0000 UTC m=+414.222286265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.529743 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.530001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530276 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530723 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" 
failed. No retries permitted until 2025-08-13 19:50:47.531413499 +0000 UTC m=+414.224078277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531955 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.537273 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.537219314 +0000 UTC m=+414.229884073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.537416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.635495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.642728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645430 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645496509 +0000 UTC m=+414.338161197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645598 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645637393 +0000 UTC m=+414.338302011 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645695 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645724716 +0000 UTC m=+414.338389334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646011 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646046705 +0000 UTC m=+414.338711533 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646110 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646139048 +0000 UTC m=+414.338803666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646248 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646286252 +0000 UTC m=+414.338950920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646358 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646388525 +0000 UTC m=+414.339053233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646647 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646685713 +0000 UTC m=+414.339350411 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647645 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648123 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.648184686 +0000 UTC m=+414.340849784 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.662162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.714059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750667 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 
19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: 
\"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751242 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751503 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752174 
4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753503 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753528 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753544 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: 
[object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753632 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753713 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753901 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754029 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754090 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754142 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754205 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754217 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not 
registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754227 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754311 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754386 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754638 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754658 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754666 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755126 4183 
projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755136 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755431 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755446 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755453 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762286 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762364 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762430 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: 
object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762444 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762500 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762545 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762599 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763275 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763423 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763463 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763538 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763608 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763626 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763638 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763686 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763747 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763942 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763995 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764065 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764110 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764155 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764215 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764273 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764319 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764384 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764405 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764469 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764527 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764567 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764628 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764693 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764706 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764764 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.765020 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.770704 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.770865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771036 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771073 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771155751 +0000 UTC m=+414.463820489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771200 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771224952 +0000 UTC m=+414.463889581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771302 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771329 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771341 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771366877 +0000 UTC m=+414.464031495 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771406 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771441 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771428418 +0000 UTC m=+414.464093146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771463 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771454529 +0000 UTC m=+414.464119237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77147303 +0000 UTC m=+414.464137728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77149748 +0000 UTC m=+414.464162068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771509 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771512321 +0000 UTC m=+414.464176909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771525 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771533591 +0000 UTC m=+414.464198189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771582363 +0000 UTC m=+414.464246961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771595 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771617 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771618 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771605553 +0000 UTC m=+414.464270151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771631324 +0000 UTC m=+414.464295922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771649985 +0000 UTC m=+414.464314703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771674 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771666855 +0000 UTC m=+414.464331573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771732037 +0000 UTC m=+414.464396745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771754358 +0000 UTC m=+414.464421876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772456498 +0000 UTC m=+414.465121096 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772475938 +0000 UTC m=+414.465140536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772492489 +0000 UTC m=+414.465157087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772509489 +0000 UTC m=+414.465174077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77253382 +0000 UTC m=+414.465198408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772558 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.7725487 +0000 UTC m=+414.465213298 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772573 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772566491 +0000 UTC m=+414.465231089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772581841 +0000 UTC m=+414.465246439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772597632 +0000 UTC m=+414.465262230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772617542 +0000 UTC m=+414.465282140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772638463 +0000 UTC m=+414.465303061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772653053 +0000 UTC m=+414.465317641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772669304 +0000 UTC m=+414.465333902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772695 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772685684 +0000 UTC m=+414.465350282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772704965 +0000 UTC m=+414.465369563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772720015 +0000 UTC m=+414.465384603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772734486 +0000 UTC m=+414.465399074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772750336 +0000 UTC m=+414.465415044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772764336 +0000 UTC m=+414.465428924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772989473 +0000 UTC m=+414.465654061 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773023504 +0000 UTC m=+414.465688102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773039184 +0000 UTC m=+414.465703772 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773064 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773054745 +0000 UTC m=+414.465719343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773074875 +0000 UTC m=+414.465739463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773098 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.773091126 +0000 UTC m=+414.465755714 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773114 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773107246 +0000 UTC m=+414.465771844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773122747 +0000 UTC m=+414.465787345 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773142567 +0000 UTC m=+414.465807165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773164398 +0000 UTC m=+414.465828996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.773181148 +0000 UTC m=+414.465845746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773531 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" 
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774431 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774451 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774535 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774768 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774878 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774962 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775033 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object 
"openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775086 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775161 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775174 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775248 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775309 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775364 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775443 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 
crc kubenswrapper[4183]: E0813 19:50:43.775457 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775514 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776292 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776311 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t 
podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.776356609 +0000 UTC m=+414.469021237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780111 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780096096 +0000 UTC m=+414.472760724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780132587 +0000 UTC m=+414.472797185 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780157448 +0000 UTC m=+414.472822046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780177648 +0000 UTC m=+414.472842236 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780197099 +0000 UTC m=+414.472861697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78021662 +0000 UTC m=+414.472881218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780241 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.78023447 +0000 UTC m=+414.472899068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780256 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78024897 +0000 UTC m=+414.472913558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780265931 +0000 UTC m=+414.472930519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780289912 +0000 UTC m=+414.472954510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780306622 +0000 UTC m=+414.472971350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780336383 +0000 UTC m=+414.473000981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780357064 +0000 UTC m=+414.473021662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780378114 +0000 UTC m=+414.473042702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780429 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780414835 +0000 UTC m=+414.473079633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: 
\"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780706 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780756 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780948 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780997 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781017 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780997852 +0000 UTC m=+414.473662700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780954 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781041 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781032083 +0000 UTC m=+414.473696701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781057 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781070 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781122675 +0000 UTC m=+414.473787603 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.800352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.859673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.882719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.882877 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.882935 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.882966 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883009 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.882982557 +0000 UTC m=+414.575647395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883026 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883020118 +0000 UTC m=+414.575684866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883672 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883699 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883708177 +0000 UTC m=+414.576372795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883735428 +0000 UTC m=+414.576400736 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883759 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883880 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883868982 +0000 UTC m=+414.576533900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883914 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883937 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883951 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883952 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883980 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883972025 +0000 UTC m=+414.576636743 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883990715 +0000 UTC m=+414.576655423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884158 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884414 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884514 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884567 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884579 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884648 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884652 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object 
"openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884592 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884706 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884720 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884728 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884744 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884883 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884899 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884909 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.884590163 +0000 UTC m=+414.577254881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885163 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885283 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885269792 +0000 UTC m=+414.577934380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885357 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885361995 +0000 UTC m=+414.578026673 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885392605 +0000 UTC m=+414.578057263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885411186 +0000 UTC m=+414.578075784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885438 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885432107 +0000 UTC m=+414.578096735 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885460627 +0000 UTC m=+414.578125335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885487 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885480158 +0000 UTC m=+414.578144746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885506 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.885532079 +0000 UTC m=+414.578196987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885622 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885638 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885652 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.885677414 +0000 UTC m=+414.578342142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885768 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886079 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886101 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886112 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:43 crc 
kubenswrapper[4183]: I0813 19:50:43.886456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886467606 +0000 UTC m=+414.579132234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886504607 +0000 UTC m=+414.579169205 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886524 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886546 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886519508 +0000 UTC m=+414.579184096 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886587 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886616 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886633 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886625161 +0000 UTC m=+414.579289759 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886643371 +0000 UTC m=+414.579308079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886664362 +0000 UTC m=+414.579329160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887184 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887236 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: 
\"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" 
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887942 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887984 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887997 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888020291 +0000 UTC m=+414.580684909 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888083 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888098 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888126604 +0000 UTC m=+414.580791512 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888172 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888192275 +0000 UTC m=+414.580856993 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888244 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888255 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888280448 +0000 UTC m=+414.580945176 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888295 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888313 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888328 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888350 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88833916 +0000 UTC m=+414.581003778 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.8883598 +0000 UTC m=+414.581024398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888382 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888393 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" 
not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888421622 +0000 UTC m=+414.581086350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888470 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888482 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888490 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888511 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888526 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888535 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888507274 +0000 UTC m=+414.581172002 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888566 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888568106 +0000 UTC m=+414.581232814 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888582 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888593 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888613 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888656339 +0000 UTC m=+414.581321067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888682109 +0000 UTC m=+414.581346777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888700 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888722161 +0000 UTC m=+414.581386899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888469 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888867 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888880 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888915 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888906736 +0000 UTC m=+414.581571474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.890483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.889077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88905856 +0000 UTC m=+414.581726188 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.949696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.989954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990313 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990387736 +0000 UTC m=+414.683052464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.990492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990832 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990886 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990991 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990967033 +0000 UTC m=+414.683631781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991646 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991690 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991701 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.991906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.992018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.991993472 +0000 UTC m=+414.684658100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.005340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.099383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.121366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.129156 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.140241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.155201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193054 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" exitCode=0
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193152 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.225975 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.228432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.295126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.356001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.423151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.435161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442647 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.459239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.504534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.564752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.591753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.621835 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.651268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.703514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.751489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.837500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.882690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.958035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.019943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.096662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.130982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.162479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.198924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211241 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213100 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215387 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.216268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.232187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.241962 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.253237 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.258222 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.277664 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.294170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.302644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.308194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.348601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.350341 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.381324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430546 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430641 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.435459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.478754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.517912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.576271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.613625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.656204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.706152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.751087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.800708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.838103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.871207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.923054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.965925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.003440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.040298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.084672 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.111724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.150511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.205934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.208577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.210708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.326550 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.339235 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.410512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.471721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.504611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620223 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620387 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.716418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.770626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.824290 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210683 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211192 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212293 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212650 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.268143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.305553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.366437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.378926 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.434695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.435147 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.501495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 
19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614478 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614559 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.614536227 +0000 UTC m=+422.307200935 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615022 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615069 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615058332 +0000 UTC m=+422.307722950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615160 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615251447 +0000 UTC m=+422.307916065 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615467 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615516745 +0000 UTC m=+422.308181523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615632 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615684 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615670329 +0000 UTC m=+422.308335107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617469 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert 
podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617514212 +0000 UTC m=+422.310179020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617585 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617619875 +0000 UTC m=+422.310285223 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617889 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.618134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618139 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618166 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.61815581 +0000 UTC m=+422.310820398 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.618712646 +0000 UTC m=+422.311377264 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619526 4183 secret.go:194] Couldn't get secret 
openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.619749666 +0000 UTC m=+422.312414274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619942 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620317352 +0000 UTC m=+422.312982040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620507 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620562 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.62060515 +0000 UTC m=+422.313269988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620660 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620729 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620718823 +0000 UTC m=+422.313383451 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620767 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620927979 +0000 UTC m=+422.313592607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620967 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620996711 +0000 UTC m=+422.313661329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621240 4183 
secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621258 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621369 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621397 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 
19:50:47.621513 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621551147 +0000 UTC m=+422.314215955 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621594 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621584858 +0000 UTC m=+422.314249506 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621659 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621712 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621699641 +0000 UTC m=+422.314364429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621734 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.621760343 +0000 UTC m=+422.314424931 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621902 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621924168 +0000 UTC m=+422.314588876 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621940318 +0000 UTC m=+422.314604916 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622000 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622021571 +0000 UTC m=+422.314686179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621068 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622137704 +0000 UTC m=+422.314802412 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622095 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622209 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622258 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622245777 +0000 UTC m=+422.314910525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.622379931 +0000 UTC m=+422.315044609 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.689579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725303 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: 
object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725388565 +0000 UTC m=+422.418053453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725475 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725610 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725643622 +0000 UTC m=+422.418308510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725698 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725723 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725732255 +0000 UTC m=+422.418396873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725974 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726013713 +0000 UTC m=+422.418678341 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726060 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726105455 +0000 UTC m=+422.418770083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726140266 +0000 UTC m=+422.418804854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726216669 +0000 UTC m=+422.418881407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727244 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727638 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.72766413 +0000 UTC m=+422.420328748 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727720 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727753 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.727745212 +0000 UTC m=+422.420409830 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.743070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.798019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.828950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829185 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829269 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.829246243 +0000 UTC m=+422.521911081 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829461 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.8294957 +0000 UTC m=+422.522160428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830151 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830262 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830348 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830510 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830576 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830670 4183
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830770 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831064 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831665 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831987 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832882 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832924 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833527 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.837627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837965 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838021 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838035 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838051 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838103 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv
podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838083896 +0000 UTC m=+422.530748614 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838113817 +0000 UTC m=+422.530778425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830022 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838144 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838152 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838161 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830098 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838186659 +0000 UTC m=+422.530851347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838247 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838254961 +0000 UTC m=+422.530919669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838279041 +0000 UTC m=+422.530943629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838293802 +0000 UTC m=+422.530958400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838308452 +0000 UTC m=+422.530973040 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838321 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838330 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838360 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838348013 +0000 UTC m=+422.531012731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838371 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838401915 +0000 UTC m=+422.531066623 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838414 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838479 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838539 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838590 4183
secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838439376 +0000 UTC m=+422.531103994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838606801 +0000 UTC m=+422.531271419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838633011 +0000 UTC m=+422.531297599 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838645 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838648692 +0000 UTC m=+422.531313290 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838667722 +0000 UTC m=+422.531332330 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838701 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838713 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838735834 +0000 UTC m=+422.531400442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838844 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838834747 +0000 UTC m=+422.531499465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838879 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838900 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838906 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838923 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838948 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838959 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838951881 +0000 UTC m=+422.531616619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838911 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838975 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838967321 +0000 UTC m=+422.531631929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.838980911 +0000 UTC m=+422.531645529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839013 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839036 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839135 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839165 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839175 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839044 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839037803 +0000 UTC m=+422.531702421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839191857 +0000 UTC m=+422.531856465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839211748 +0000 UTC m=+422.531876336 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839231119 +0000 UTC m=+422.531895707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839256 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83928366 +0000 UTC m=+422.531948278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839297 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839321921 +0000 UTC m=+422.531986539 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839333 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839358622 +0000 UTC m=+422.532023240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839220 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839397443 +0000 UTC m=+422.532062061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839410 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839442 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.839438954 +0000 UTC m=+422.532103652 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839478 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839472065 +0000 UTC m=+422.532136653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839503 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839502776 +0000 UTC m=+422.532167464 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839528127 +0000 UTC m=+422.532192895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839565 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839585 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839591 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839625 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 
19:50:47.839631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83962072 +0000 UTC m=+422.532285398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839595 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83964991 +0000 UTC m=+422.532314629 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839670371 +0000 UTC m=+422.532335129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839707 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839739 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839734773 +0000 UTC m=+422.532399471 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839768684 +0000 UTC m=+422.532433272 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839898 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839905 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839920 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839935339 +0000 UTC m=+422.532599957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839949249 +0000 UTC m=+422.532613867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839992 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839995 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840004 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840012 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840020 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840014021 +0000 UTC m=+422.532678639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840030151 +0000 UTC m=+422.532694759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840059 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840069 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840078 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840101113 +0000 UTC m=+422.532765811 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840111 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840138 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839095 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830081 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837980 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840139 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840133744 +0000 UTC m=+422.532798362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840215 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840222027 +0000 UTC m=+422.532886615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840249 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840287 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840243467 +0000 UTC m=+422.532908335 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840309599 +0000 UTC m=+422.532974187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840070 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840349 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840358 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840351351 +0000 UTC m=+422.533015969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839372 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840300 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840379601 +0000 UTC m=+422.533044309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840406552 +0000 UTC m=+422.533071170 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840431 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840426093 +0000 UTC m=+422.533090681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840459 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840474 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840508 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840566 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840614 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840630 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840634 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840648 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840657 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840500815 +0000 UTC m=+422.533165523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840699081 +0000 UTC m=+422.533363669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840868 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840856325 +0000 UTC m=+422.533521013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840872 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840884726 +0000 UTC m=+422.533549324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840710 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840753 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840902896 +0000 UTC m=+422.533567504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840950378 +0000 UTC m=+422.533614996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840977998 +0000 UTC m=+422.533642586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840998939 +0000 UTC m=+422.533663527 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841015 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841021 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.84101563 +0000 UTC m=+422.533680218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841029 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841066641 +0000 UTC m=+422.533731249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841106 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841114 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841131 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841139653 +0000 UTC m=+422.533804261 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840567 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841163104 +0000 UTC m=+422.533827702 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841187 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841179374 +0000 UTC m=+422.533844092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842130 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842245 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842271 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842559 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.842411779 +0000 UTC m=+422.535076617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.874231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.938979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939409 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939429 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939630198 +0000 UTC m=+422.632294816 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939725 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939761842 +0000 UTC m=+422.632426710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940108 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940129 4183 projected.go:294] Couldn't get 
configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940141 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940181 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940169033 +0000 UTC m=+422.632833711 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940232 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940279 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940268286 +0000 UTC m=+422.632932904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940334 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940349 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: 
E0813 19:50:47.940366 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940391 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940397 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.9403891 +0000 UTC m=+422.633053798 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940458 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940479 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940488 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940511083 +0000 UTC m=+422.633175711 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.940896614 +0000 UTC m=+422.633561322 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940987 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941019 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941011157 +0000 UTC m=+422.633675865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941070 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941084 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941093 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941129 
4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94111815 +0000 UTC m=+422.633782768 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940570 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941148 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941156 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941180622 +0000 UTC m=+422.633845240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941231 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941276 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941259365 +0000 UTC m=+422.633924053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941301 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941322536 +0000 UTC m=+422.633987234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941384 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941402 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941412 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941426 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc 
kubenswrapper[4183]: E0813 19:50:47.941438 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941445 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94143846 +0000 UTC m=+422.634103238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941469 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94146257 +0000 UTC m=+422.634127188 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941502 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941509 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941520 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941559 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941584 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941602 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941519 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941664 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941666 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: 
E0813 19:50:47.941713 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941676 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941525612 +0000 UTC m=+422.634190390 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod 
\"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943332 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943395 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943438177 +0000 UTC m=+422.636102905 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943499 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943529049 +0000 UTC m=+422.636193667 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943594 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943615322 +0000 UTC m=+422.636280020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943658 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943680 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943701 4183 projected.go:200] Error preparing data 
for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943684044 +0000 UTC m=+422.636348752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943739215 +0000 UTC m=+422.636403903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944075 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944168 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944191 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943764986 +0000 UTC m=+422.636429574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.944206049 +0000 UTC m=+422.636870647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944220879 +0000 UTC m=+422.636885467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94423529 +0000 UTC m=+422.636899878 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944245 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944259 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94425088 +0000 UTC m=+422.636915568 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944304 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944316 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944307182 +0000 UTC m=+422.636971770 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944329442 +0000 UTC m=+422.636994150 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944356 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944393 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944408 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944385594 +0000 UTC m=+422.637050312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944476 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944492 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944503 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944516 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944527 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944566 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 
19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944469226 +0000 UTC m=+422.637133924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944621 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94461294 +0000 UTC m=+422.637277528 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944671092 +0000 UTC m=+422.637335680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944689213 +0000 UTC m=+422.637353801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944705323 +0000 UTC m=+422.637369911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944723814 +0000 UTC m=+422.637388402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944744304 +0000 UTC m=+422.637408902 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944758215 +0000 UTC m=+422.637422813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943607 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.945173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.945145896 +0000 UTC m=+422.637813094 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.970464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.025491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048135 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048174 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: 
E0813 19:50:48.048191 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048211 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048229 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048241 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.048143 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048473 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" 
failed. No retries permitted until 2025-08-13 19:50:56.048243422 +0000 UTC m=+422.740908190 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048529 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048625133 +0000 UTC m=+422.741289911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048689165 +0000 UTC m=+422.741354073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.160189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211613 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.221377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.295030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390246 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b" exitCode=0 Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390423 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.397025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.435909 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.436378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.645136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.691565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.869448 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.919496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.108730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209291 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209645 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209904 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210155 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.221470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433890 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.675231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 
2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.737545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.863373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.933597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.117097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210288 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.351920 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.412428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652"} Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.416657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"} Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.430274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437261 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211973 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.267356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438392 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.011508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.092727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.152017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.195546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209546 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.255546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.321627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.379498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438229 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.445266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.466404 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.507118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.587255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.647267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.695147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.760623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.811756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.859250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885758 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:52Z","lastTransitionTime":"2025-08-13T19:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.899433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.923967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.206752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209838 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210187 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210380 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210944 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211372 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.248569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 
2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.289767 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.291489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307515 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.337623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.338296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349112 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349235 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349255 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.368420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.383148 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391586 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391668 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.416399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.425267 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447697 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448058 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455621 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482859 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482984 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.512518 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.525334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.605677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.643417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.707289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.770990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.829555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.870005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.917092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.985387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.036606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.082678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.117446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.178301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210443 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.246101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.317140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451056 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.592488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667318 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667770 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668464 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.759257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.912115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.013890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.053062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:48Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f39
2d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.126303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.220343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213404 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213700 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.228215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.236745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.237080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.326347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.368346 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.378972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:55 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:55 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.461126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686283 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686525 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686549 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.68728321 +0000 UTC m=+438.379948088 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686578 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.687708552 +0000 UTC m=+438.380373180 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687758 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.688567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.688554646 +0000 UTC m=+438.381219274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689355569 +0000 UTC m=+438.382020177 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689461712 +0000 UTC m=+438.382126400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689672 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690132 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.690684417 +0000 UTC m=+438.383349035 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.690422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691306 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691526 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690473 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690501 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690309 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691393 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691463 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692606 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692594162 +0000 UTC m=+438.385259040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692992 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692982133 +0000 UTC m=+438.385646721 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693110486 +0000 UTC m=+438.385775084 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693206309 +0000 UTC m=+438.385870897 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693320 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693310182 +0000 UTC m=+438.385974780 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692076 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694048 
4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694260 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69429632 +0000 UTC m=+438.386960938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.694459495 +0000 UTC m=+438.387124123 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694504 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.694651941 +0000 UTC m=+438.387316559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694740 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69499471 +0000 UTC m=+438.387659438 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694659 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695038 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695074473 +0000 UTC m=+438.387739331 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694083 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695126 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695118654 +0000 UTC m=+438.387783252 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.694214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695976 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696095 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696244 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696283 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696457 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696587 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696370 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696411 4183 configmap.go:199] Couldn't get 
configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696926 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69742201 +0000 UTC m=+438.390086608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697486782 +0000 UTC m=+438.390151370 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697510942 +0000 UTC m=+438.390175660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697527913 +0000 UTC m=+438.390192591 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.697551163 +0000 UTC m=+438.390215761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697572 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697565264 +0000 UTC m=+438.390229852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697588 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697581864 +0000 UTC m=+438.390246542 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697604985 +0000 UTC m=+438.390269573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.801620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802669 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803059 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.80273274 +0000 UTC m=+438.495397548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.803550453 +0000 UTC m=+438.496215061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.803959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806009 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807717 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.807895747 +0000 UTC m=+438.500560485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807996 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808038 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808028671 +0000 UTC m=+438.500693289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808087 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808114633 +0000 UTC m=+438.500779241 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808185625 +0000 UTC m=+438.500850493 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808366 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808393271 +0000 UTC m=+438.501057889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808947 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808987 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808978388 +0000 UTC m=+438.501643106 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802598 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.811435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.812129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.810984025 +0000 UTC m=+438.504770305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.851263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915666 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915700 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915735 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915788 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916243 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916390 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916473 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916680 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916784 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917316 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917419657 +0000 UTC m=+438.610084385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917952 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918000 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917980323 +0000 UTC m=+438.610645041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918051 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918079 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918071946 +0000 UTC m=+438.610736554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918120 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918144568 +0000 UTC m=+438.610809406 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918200 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91822545 +0000 UTC m=+438.610890048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918266 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918282462 +0000 UTC m=+438.610947060 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918348 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918367454 +0000 UTC m=+438.611032052 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918438 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918459 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918474 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918504678 +0000 UTC m=+438.611169286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918581 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918594 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918602 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918622042 +0000 UTC m=+438.611286750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918674 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918696024 +0000 UTC m=+438.611360722 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918755 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918785 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918778266 +0000 UTC m=+438.611443244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918990 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919008 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919017 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod 
openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919055 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.919045494 +0000 UTC m=+438.611710122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919418 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919440 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919506 4183 configmap.go:199] Couldn't get configMap 
openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919551 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919628 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919640 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919648 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919716 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919855 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919869 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 
19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919902 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.919887628 +0000 UTC m=+438.612552236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.919906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919946 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919979 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91997055 +0000 UTC m=+438.612635248 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920023 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920052 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920045362 +0000 UTC m=+438.612709970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920094 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920121635 +0000 UTC m=+438.612786233 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920175 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920213 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920201277 +0000 UTC m=+438.612865885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920266 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920278 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920291 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 
19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.9203158 +0000 UTC m=+438.612980508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920369 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920389642 +0000 UTC m=+438.613054250 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920456 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920477535 +0000 UTC m=+438.613142243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920553607 +0000 UTC m=+438.613218215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921058 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921091 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921082302 +0000 UTC m=+438.613746910 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921133 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921155184 +0000 UTC m=+438.613823082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921053 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921211 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921232896 +0000 UTC m=+438.613897604 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921276 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921297218 +0000 UTC m=+438.613961826 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921345 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92136588 +0000 UTC m=+438.614030488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921389241 +0000 UTC m=+438.614053829 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921411411 +0000 UTC m=+438.614075999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921425802 +0000 UTC m=+438.614090460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921441252 +0000 UTC m=+438.614105840 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921492 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921515944 +0000 UTC m=+438.614180542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921573 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921595287 +0000 UTC m=+438.614259895 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921648 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921683 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921668739 +0000 UTC m=+438.614333337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921722 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921762 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921752631 +0000 UTC m=+438.614417239 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922122 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921977838 +0000 UTC m=+438.614642706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922324 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922356188 +0000 UTC m=+438.615020806 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922417 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922438651 +0000 UTC m=+438.615103379 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922525 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922539 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not 
registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922585945 +0000 UTC m=+438.615250683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922609526 +0000 UTC m=+438.615274124 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922653 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922672687 +0000 UTC m=+438.615337305 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923326 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923697 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923725 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 
19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924055 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924200 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924221 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: 
[object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924263 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924254833 +0000 UTC m=+438.616919451 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924374 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924389 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924432 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924423368 +0000 UTC m=+438.617088096 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924472 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924490 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924497 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924506 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object 
"openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924521 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924543811 +0000 UTC m=+438.617208429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924591 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924657 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.924566032 +0000 UTC m=+438.617230620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924734136 +0000 UTC m=+438.617398854 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924752707 +0000 UTC m=+438.617417415 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924778 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92485985 +0000 UTC m=+438.617524458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924897 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924932 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924953 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 
19:50:55.924962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924976 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925029 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925098 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925111 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925118 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925145 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925163 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925191 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925207 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925232 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925266 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924927252 +0000 UTC m=+438.617591980 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937200363 +0000 UTC m=+438.629864961 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937234514 +0000 UTC m=+438.629899102 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937253364 +0000 UTC m=+438.629918072 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937267495 +0000 UTC m=+438.629932093 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937288645 +0000 UTC m=+438.629953233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937312466 +0000 UTC m=+438.629977054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937344 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937336327 +0000 UTC m=+438.630000925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937358707 +0000 UTC m=+438.630023305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925301 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937573 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925337 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925384 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925434 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925468 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925505 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925625 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925665 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937994 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937974755 +0000 UTC m=+438.630639383 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938027156 +0000 UTC m=+438.630691774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938059 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938074 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938410227 +0000 UTC m=+438.631074965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938559662 +0000 UTC m=+438.631224370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938575782 +0000 UTC m=+438.631240490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938597 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938590642 +0000 UTC m=+438.631255360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938446 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938630694 +0000 UTC m=+438.631295302 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938463 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938679735 +0000 UTC m=+438.631344463 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938735957 +0000 UTC m=+438.631400585 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026429 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027185 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.028960 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029188 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029166811 +0000 UTC m=+438.721831769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029301 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029321 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029334 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029360627 +0000 UTC m=+438.722025315 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029494 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029526441 +0000 UTC m=+438.722191140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029607 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029621 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029631 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029662255 +0000 UTC m=+438.722326953 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029736 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029780 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029768928 +0000 UTC m=+438.722433606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029966 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030001685 +0000 UTC m=+438.722666503 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030070 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030109 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030092478 +0000 UTC m=+438.722757176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030173 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030188 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030200 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc 
kubenswrapper[4183]: E0813 19:50:56.030236 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030224091 +0000 UTC m=+438.722888779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030290 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030317534 +0000 UTC m=+438.722982232 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030379 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030409057 +0000 UTC m=+438.723073755 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030467 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030503499 +0000 UTC m=+438.723168187 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030597342 +0000 UTC m=+438.723262030 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030673 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030691 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030744 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030725956 +0000 UTC m=+438.723390654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031240 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052060916 +0000 UTC m=+438.744725524 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031395 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052114667 +0000 UTC m=+438.744779285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031469 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052146 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052160 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc 
kubenswrapper[4183]: E0813 19:50:56.052194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052186179 +0000 UTC m=+438.744850787 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031518 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052217 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052225 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052242381 +0000 UTC m=+438.744906989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031562 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052270 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052277 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052293622 +0000 UTC m=+438.744958230 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038112 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052325 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052346404 +0000 UTC m=+438.745011012 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038147 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052384525 +0000 UTC m=+438.745049133 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038188 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052415 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052423 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod 
openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052446107 +0000 UTC m=+438.745110715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038222 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053401 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053450 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053438615 +0000 UTC m=+438.746103293 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038271 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053477 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053486 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053516757 +0000 UTC m=+438.746181375 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038317 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053551 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053562 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053585829 +0000 UTC m=+438.746250447 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038353 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.0536242 +0000 UTC m=+438.746288818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038405 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:12.053660101 +0000 UTC m=+438.746324789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053702 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053711 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053738 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053729263 +0000 UTC m=+438.746393961 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038493 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053783 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053774815 +0000 UTC m=+438.746439503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038522 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054143315 +0000 UTC m=+438.746808003 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038554 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054186946 +0000 UTC m=+438.746851634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039152 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054221 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object 
"openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054256728 +0000 UTC m=+438.746921436 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054292 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054302 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 
podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.05432556 +0000 UTC m=+438.746990238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039241 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054366301 +0000 UTC m=+438.747030989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039293 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054394 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054423 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054417813 +0000 UTC m=+438.747082431 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039337 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054442 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054478 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054470504 +0000 UTC m=+438.747135142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039593 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054508025 +0000 UTC m=+438.747172713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.063362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.063190003 +0000 UTC m=+438.755854841 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.069783 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.131035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133448 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133513 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133528 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.134696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135692 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137675 4183 projected.go:294] Couldn't 
get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137709 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.135714826 +0000 UTC m=+438.828379424 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.138024 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.137987071 +0000 UTC m=+438.830652129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.141418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142038 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142077 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.142238353 +0000 UTC m=+438.834903071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.184985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209896 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.211124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.247521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.293759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.333889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.391443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.433995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.557632 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9"} Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.656619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.900761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.113489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49c
c12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686
e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215720 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216672 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.218341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220176 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220778 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.225035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.227983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228347 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.230055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.253521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; 
done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.330164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.401170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.446947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.447375 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.468128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.495680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.528711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.556147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.595352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.677505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.724467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.802921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.845900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.891453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.932938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.009762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.134739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.286195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.322688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.366602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.400446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.436993 4183 
patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.438613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4
b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.475304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.508161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.537058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573022 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87" exitCode=0 Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573114 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" 
event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"} Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.574289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.600757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.628072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.649170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.686045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.715759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.748028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.770996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.797042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.827005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.871950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.905761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.947086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.974070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.020358 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.075759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.129960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.169723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215110 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215648 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216120 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216440 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219273 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.224393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.224581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.237571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.291390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136
a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4
fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.322467 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.350143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.406518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.435894 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.436017 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.445968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.473691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.501036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.531341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.613399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.652141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.686485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.728089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.758686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.806360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.848123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.894343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.940165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.991706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.075096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.136959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.192562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.208975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.233030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.264467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.293098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.322323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.361048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.378410 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.391430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289
b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.425056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433337 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433920 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.461912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the 
watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.525496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.552112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.579068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606494 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf" exitCode=0 Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf"} Aug 13 
19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.618186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6"} Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.622722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.658214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.694858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.734452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.807626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.833256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.858683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.901316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.932641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-ho
st-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.961318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.999348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.036401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.063490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.114161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.147094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.179297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210075 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210461 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211513 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212778 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.216049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.216562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.217613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.250750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.281051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.309018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.343007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.370518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.398015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.457964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.458680 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.468953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.532073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.613927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.659184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.715514 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.766423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.805048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.837733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.880652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.923471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f39
2d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.001616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.034109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.065888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.108057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.148203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.175307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.199300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209926 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210538 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.232958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.262201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.284102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.312208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.346709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.374514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.425160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432559 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432655 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.469506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.498331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.535115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.562049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.613405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.652239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658277 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6" exitCode=0 Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.074150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.128730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.191086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.208863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209274 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209435 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209615 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209894 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210270 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211507 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.219195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 
10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.258186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.320661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.353456 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.385083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.415414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437119 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437734 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.472669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.516271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.561183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.599609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.648547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.701203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.738746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.739324 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740622 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.751859 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.864463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865516 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865572 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865613 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.891201 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910637 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.969222 4183 
kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.971533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980716 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.000279 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014669 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014713 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014887 4183 setters.go:574] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.036882 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e032
34f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":5
01535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e6
1b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.056893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.058050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.060678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062055 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: 
I0813 19:51:04.062748 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091697 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091757 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.099907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.147366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.176675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.193186 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209157 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209997 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.217704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0e
a9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.269547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.280166 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.356140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.397459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.433765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:04 crc kubenswrapper[4183]: healthz 
check failed Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.434007 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.468167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.524495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.616197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.650058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.671362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.755020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.921579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:04.999980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.038347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.063112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.093964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.176644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f
956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211072 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212490 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212584 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216131 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216513 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.217144 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.229455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.350753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.382071 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.420324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434696 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.482096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.523588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.590380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.617719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.722247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.778175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.927874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.970170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.065532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.125139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209192 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209925 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210721 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211145 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211643 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213220 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213967 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214391 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215764 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.495766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.548358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.618118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.646137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.669107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.692050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.738361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.804098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.833114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.862096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.898239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.939601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.972192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.027572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.061320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.097478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.127744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.148870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.179521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.205912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209366 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.210068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.234059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.264989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.289536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434084 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.916143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.949487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.984255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.013006 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.051369 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.076465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.108584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.144491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.174097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.207762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208596 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209123 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209710 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211867 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.212121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.213192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.238434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.274247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.298652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.432176 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:09 crc 
kubenswrapper[4183]: I0813 19:51:09.432303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.825172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.850296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.889281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.918575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.948323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.972630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.990956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.011964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136
a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.059552 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.112084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.196681 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.211033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.212099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.243285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.388547 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444178 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444296 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.530199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.575148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.630057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.681587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.721430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.774552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.814963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846009 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c" exitCode=0 Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.850105 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.084491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.116248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.137138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.180652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.201934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.219170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.238187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.252902 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.261025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.281422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.313526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.340654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.369495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.389295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.407039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432621 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.447233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.467723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.492706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.517337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.545198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.567309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.586343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.606654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.631234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.649246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.666476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.679672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.701056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.721531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.742491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.759196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.772995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773082 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773139 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773192 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773230 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773207915 +0000 UTC m=+470.465872723 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773255 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773240246 +0000 UTC m=+470.465904964 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773409 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773544 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773720 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774147 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774641 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775040 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775096969 +0000 UTC m=+470.467761707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775172 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775227413 +0000 UTC m=+470.467892221 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775298 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775321926 +0000 UTC m=+470.467986554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775384 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775429169 +0000 UTC m=+470.468093897 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775477 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775496171 +0000 UTC m=+470.468160779 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775538 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775554363 +0000 UTC m=+470.468219071 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775620 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775633 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775664646 +0000 UTC m=+470.468329374 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775712 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775731128 +0000 UTC m=+470.468395856 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775858 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775887872 +0000 UTC m=+470.468552600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775941 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775959624 +0000 UTC m=+470.468624552 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776001 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776017276 +0000 UTC m=+470.468682024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776055 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776070067 +0000 UTC m=+470.468734795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776111 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776130729 +0000 UTC m=+470.468795457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776173 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776188091 +0000 UTC m=+470.468852789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776247 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776260 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776278083 +0000 UTC m=+470.468942791 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776322 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776338965 +0000 UTC m=+470.469003663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776379 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776394707 +0000 UTC m=+470.469059315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776434 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776452058 +0000 UTC m=+470.469116776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776489 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.77650713 +0000 UTC m=+470.469171738 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776547 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776565181 +0000 UTC m=+470.469229799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776605 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776627503 +0000 UTC m=+470.469292141 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776920 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.777002534 +0000 UTC m=+470.469667382 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776700 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777065 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.777057855 +0000 UTC m=+470.469722583 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.782555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.803531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.821089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.838261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855374 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be" exitCode=0 Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be"} Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.871927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod 
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876125 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876314 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878534 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878613548 +0000 UTC m=+470.571278276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878685 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878706221 +0000 UTC m=+470.571370829 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878750 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878768702 +0000 UTC m=+470.571433430 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878901 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878921087 +0000 UTC m=+470.571585705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878963 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878979138 +0000 UTC m=+470.571643756 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879015 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.8790307 +0000 UTC m=+470.571695318 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879070 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879185 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879177064 +0000 UTC m=+470.571841682 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879233 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879311 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879253696 +0000 UTC m=+470.571918324 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879362 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879409461 +0000 UTC m=+470.572074189 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.904850 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.927889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.946533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.968362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979870 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980130 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980157 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc 
kubenswrapper[4183]: I0813 19:51:11.980346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980965 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981086 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981274 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981315 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981347424 +0000 UTC m=+470.674012162 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981383 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981383195 +0000 UTC m=+470.674047793 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981403316 +0000 UTC m=+470.674068034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981463 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981495018 +0000 UTC m=+470.674159756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981544 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981555 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.98156337 +0000 UTC m=+470.674228098 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981583301 +0000 UTC m=+470.674248019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981603 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981621632 +0000 UTC m=+470.674286350 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981680 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981702304 +0000 UTC m=+470.674366932 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981726 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981747 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981766226 +0000 UTC m=+470.674430854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981889 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981880149 +0000 UTC m=+470.674544887 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981912 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981936 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981951652 +0000 UTC m=+470.674616340 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981989143 +0000 UTC m=+470.674653741 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982011 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982004583 +0000 UTC m=+470.674669171 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982051 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982063 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982069145 +0000 UTC m=+470.674733873 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982077 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982110 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982124796 +0000 UTC m=+470.674789404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982141687 +0000 UTC m=+470.674806275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982165 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982186978 +0000 UTC m=+470.674851596 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982220 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982237 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982242 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982272091 +0000 UTC m=+470.674936889 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982290091 +0000 UTC m=+470.674954809 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982309542 +0000 UTC m=+470.674974130 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982334 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982345 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982359 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982368 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982365853 +0000 UTC m=+470.675030521 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982385 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982394 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982408 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982421 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982412705 +0000 UTC m=+470.675077323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982425 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982431585 +0000 UTC m=+470.675096173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982448116 +0000 UTC m=+470.675112904 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982458 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982488 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982487107 +0000 UTC m=+470.675151725 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982502 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982511 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982520 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982535 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982531998 +0000 UTC m=+470.675196726 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982490 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982582 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982617 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982632 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982685 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982723 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982732 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982866 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982894 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982934 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982936 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983119 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983123 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982371 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982944 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982224 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982966241 +0000 UTC m=+470.675630979 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98331422 +0000 UTC m=+470.675978929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983343 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983335961 +0000 UTC m=+470.676000549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983052 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983355932 +0000 UTC m=+470.676020530 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983376992 +0000 UTC m=+470.676041590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983391963 +0000 UTC m=+470.676056671 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983417733 +0000 UTC m=+470.676082411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983434204 +0000 UTC m=+470.676135313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983503 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983494216 +0000 UTC m=+470.676158894 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983510786 +0000 UTC m=+470.676175474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983525877 +0000 UTC m=+470.676190475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.983574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\")
" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: 
\"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985073 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985516 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985530 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985539 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985572 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985562785 +0000 UTC m=+470.678227523 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985649 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985685668 +0000 UTC m=+470.678350366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985872 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.985705159 +0000 UTC m=+470.678369837 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985898 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985889384 +0000 UTC m=+470.678554092 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985912 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985906435 +0000 UTC m=+470.678571143 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985920805 +0000 UTC m=+470.678585513 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985942 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985935475 +0000 UTC m=+470.678600073 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985992 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986006 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986015 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986035038 +0000 UTC m=+470.678699776 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986085 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986097 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986106 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986130 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986123101 +0000 UTC m=+470.678787829 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986183532 +0000 UTC m=+470.678848140 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986232 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986242 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986251 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986268455 +0000 UTC m=+470.678933443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986309 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986324746 +0000 UTC m=+470.678989465 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986369 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986379 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986388 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986405849 +0000 UTC m=+470.679070587 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986447 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98646266 +0000 UTC m=+470.679127368 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986512 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986521 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986539113 +0000 UTC m=+470.679203731 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986581 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986598654 +0000 UTC m=+470.679263272 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987126 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987170 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object 
"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987205112 +0000 UTC m=+470.679869720 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987251 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987269594 +0000 UTC m=+470.679934322 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987307 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987322555 +0000 UTC m=+470.679987173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987383 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987395 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod 
openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987445 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987465 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987477 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987500 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987524 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object
"openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987530 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987535 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987543 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987620 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987669 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e
nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987419648 +0000 UTC m=+470.680084376 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987689366 +0000 UTC m=+470.680353954 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987709946 +0000 UTC m=+470.680374534 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987727907 +0000 UTC m=+470.680392495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987741927 +0000 UTC m=+470.680406515 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987755017 +0000 UTC m=+470.680419615 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987769398 +0000 UTC m=+470.680544489 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987910832 +0000 UTC m=+470.680575420 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987929572 +0000 UTC m=+470.680594160 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.990337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.009479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136
a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 
2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.027937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.052380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.071366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") "
pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod
\"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\"
(UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088503 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088526 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088550 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088587829 +0000 UTC m=+470.781252447 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088660 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088671 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088678 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088696492 +0000 UTC m=+470.781361110 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088743 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088760304 +0000 UTC m=+470.781424912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088910 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088932999 +0000 UTC m=+470.781597607 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088972 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088990601 +0000 UTC m=+470.781655399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089045 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089067713 +0000 UTC m=+470.781732651 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089123 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089144575 +0000 UTC m=+470.781809373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089212 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089238618 +0000 UTC m=+470.781903326 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089257 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089323 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089341771 +0000 UTC m=+470.782006499 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089388 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089406333 +0000 UTC m=+470.782070951 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089471 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: 
\"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089482 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089506105 +0000 UTC m=+470.782170774 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089546 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089557 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089566 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089584738 +0000 UTC m=+470.782249366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094238 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094322 4183 projected.go:294] Couldn't get configMap 
openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094424 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.094431806 +0000 UTC m=+470.787096604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095038014 +0000 UTC m=+470.787702712 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095237 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095257 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095306281 +0000 UTC m=+470.787970969 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095621 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096503 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096536 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096554 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096700 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096881 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod 
openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.096719552 +0000 UTC m=+470.789384230 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.097979 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098004 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098050 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098317 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.097172025 +0000 UTC m=+470.789836673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098952 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098980 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.098925045 +0000 UTC m=+470.791589653 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.09912044 +0000 UTC m=+470.791785028 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099143831 +0000 UTC m=+470.791808469 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099711007 +0000 UTC m=+470.792375715 (durationBeforeRetry 32s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.100349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.100468 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:44.10120308 +0000 UTC m=+470.793867898 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.101298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.105427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: 
\"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106924 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106954 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106967 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107016596 +0000 UTC m=+470.799681254 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101617 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107079328 +0000 UTC m=+470.799743946 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107306 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107323 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107360976 +0000 UTC m=+470.800025594 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107470 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107566 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107578 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object 
"openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107556381 +0000 UTC m=+470.800221119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107920772 +0000 UTC m=+470.800585370 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109595 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 
19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109643 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109632791 +0000 UTC m=+470.802297409 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109751 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109766 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109860 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 
19:51:12.109881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109907 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109895198 +0000 UTC m=+470.802559826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109945 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110001 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: 
\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110024812 +0000 UTC m=+470.802689430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110110 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110188447 +0000 UTC m=+470.802853275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110223158 +0000 UTC m=+470.802887826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111062 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.111252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: 
\"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111331929 +0000 UTC m=+470.803996717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111509 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.11170629 +0000 UTC m=+470.804370998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111887 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111922 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111979728 +0000 UTC m=+470.804644426 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112028 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.1120664 +0000 UTC m=+470.804731068 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.115026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115381 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115405 4183 
projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115426 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.115469188 +0000 UTC m=+470.808134116 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.125390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.160298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.177964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.207374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.211039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.218271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219700 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219926 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220010 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220065 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220121 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220103548 +0000 UTC m=+470.912768326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219955 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220153 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220166 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220189081 +0000 UTC m=+470.912853879 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220020 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.222958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220358935 +0000 UTC m=+470.913023683 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.233442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.251069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.267173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.285887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.303890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.318966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.336464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.351733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.367608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.392973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.412932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.426069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.432979 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:51:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.433088 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.443656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.462753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.490319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.503713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.522402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.545945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.561971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.580005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.603304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.624736 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.643679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.662195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.678466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.698843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.715483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.731612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.747235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.772334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.797919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.814680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.832541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.851410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-ho
st-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.865155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f"} Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.875411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.895211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.912060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.938242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.954307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.971114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.989738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.004040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.023087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.045277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.057881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.080718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.096717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.111548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.129164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.141299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.162088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.180579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.195850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208440 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210848 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.216768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.241924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.260389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.275694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.302278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.337870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.377087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.417746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431914 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.463690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.501359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.539684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.578988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.616479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.657338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.700046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.738332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.778629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.830527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.881299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.918914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.945133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.977665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.022864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.057453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.106942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.139392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.176210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.223399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.258626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.302991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.314875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315148 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315496 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.335936 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.339166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2
482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342413 4183 setters.go:574] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.360299 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1
e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\
\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f1
04e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365857 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365874 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365920 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.386918 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391526 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391559 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391601 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.409225 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413737 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.414015 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.421178 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432215 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432905 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432958 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.457277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.497870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.541958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.584955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.628287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.667723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.701660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.738179 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.813600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.834432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.860175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.901732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.946650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.977707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.018738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.056296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.096667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.140752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.188368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209220 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210036 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210244 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210683 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211495 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212212 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.224061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.285058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.389688 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432067 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432156 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.510596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.669903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.828137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.871104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.984308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.033579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.064140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209587 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.273030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.338308 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.363562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.385404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.404099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.421513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432631 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432723 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.440407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.474394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.494576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.515876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.534547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.555438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.573903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.617191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.637756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.655413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.671890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.703082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.727226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.745717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.760621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.775566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.810502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.833505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.850015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.867168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.891755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.904583 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453"} Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.908658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.924159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.939621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.956049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.971838 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.991393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.005948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.021381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.040133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.056056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.072695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.099019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.139018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.176695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208962 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209211 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209546 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.208992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210079 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.213292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.220088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.258336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.300519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.339115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.380173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.421761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431862 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.456697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.498064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.548357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.577003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.622515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.660262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.698527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.739460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.776891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.817708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.859649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.896351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.940258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.977728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.020480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.058168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.103326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.143005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.176901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.220355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.260275 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.298617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.339464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.381536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.418447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432906 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.460858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.498501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.542007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.580348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.617100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.657993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.698140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.738018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.786119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.818503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.856961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.899682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.941592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.975461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.019726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.060018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.098167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.138641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.177624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.209764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.212704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.213072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.220908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.259021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.300585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.338432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.380912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.419832 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.431902 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.432320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.458858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.500419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.537872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.577374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.623367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.766701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.790947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.820256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.850595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.874545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.892241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.909876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.938441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.977522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.017315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.057197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.098237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.139069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.184590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208428 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.208644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.218435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.259151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.296561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.340897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.379992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.392194 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.423043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f1
37a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:5
0:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.458549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.500177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.541717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.579589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.623195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.658644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.703630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.739581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.777685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.817014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.864239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.900465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.938106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.979222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.024705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208533 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209559 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209763 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.432517 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.433903 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432099 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432193 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212247 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431706 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431872 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208756 4183 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209064 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.432906 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.433026 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639317 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639447 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.653677 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.658767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.674016 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679493 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679655 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.696555 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701824 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701844 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701862 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.702191 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.716616 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721878 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738284 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.231529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.249615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.265732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.279593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.295707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.313038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.328375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.345296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.367307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.383495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.393295 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.400683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.416499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432475 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432588 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.433335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.457061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.474546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.490258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.509655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.527202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.545919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.565131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.580255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.613675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.629380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.649561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.666564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.685427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.707308 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.724742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.746955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.766518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.785331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.804706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.828198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.844508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.880048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.895446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.920745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.948183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.005452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.022371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.040235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.061654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.084113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.099721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.116106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.131947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.146928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.170229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.183011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.196946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208225 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.215950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad73
3f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.232440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.249114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.272082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.288483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.305123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.320452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.337512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.352419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.368181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.386370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.402988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.421871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432242 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432343 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.440943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.456318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210742 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.433882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.434002 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.208712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433544 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.395481 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.433977 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.434108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215259 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.218947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.219121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433089 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.213666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433117 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433221 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208543 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433364 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.869755 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870375 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870426 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.893462 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899691 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899726 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899874 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.914523 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919530 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919560 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.935607 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941570 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.956460 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962222 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962253 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977525 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977593 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209497 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209326 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.211929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.213083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.233218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.248274 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.262142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.282086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.300956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.326733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.344253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.361191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.376080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.390425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.398056 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.413960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.430656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432073 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.448265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.464224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.484653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.509143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.523885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.539186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.553393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.574891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.598559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.615859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.633722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.653005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.669221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.684397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.700524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.716653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.735922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.752281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.772240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.795576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.811142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.826870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.846876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.864673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.880239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.895552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.910379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.926482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.941238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.955151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.971067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.000906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.017406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.035327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.050655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.065579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.101752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.172718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.187702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.212151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.227295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.242123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.256967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.267353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.282965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.298417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.317515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.334164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.351543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.368673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.385298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.399928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.417266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.431895 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.448930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209718 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210326 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437841 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.208923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431243 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431333 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433011 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.267242 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.400725 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.433040 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211399 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211956 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436074 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:41 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:41 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436377 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433429 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:42 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:42 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.215137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432343 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798372 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798525 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object 
"openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.798600749 +0000 UTC m=+534.491265497 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799012 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799126 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799727 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799509 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799549 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799567 4183 secret.go:194] Couldn't get secret 
openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799602 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799598 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799597 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799630 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799653 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799659 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799674 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800140 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object 
"openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799905 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.799887525 +0000 UTC m=+534.492552243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800189444 +0000 UTC m=+534.492854132 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800248 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800213594 +0000 UTC m=+534.492878263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800258746 +0000 UTC m=+534.492923464 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:47.800277976 +0000 UTC m=+534.492942674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800295277 +0000 UTC m=+534.492959965 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800319788 +0000 UTC m=+534.492984486 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800339328 +0000 UTC m=+534.493004036 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800357329 +0000 UTC m=+534.493021997 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800374099 +0000 UTC m=+534.493038737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.80039226 +0000 UTC m=+534.493056958 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: 
\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801147 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object 
"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801169 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801195152 +0000 UTC m=+534.493859860 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801243 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801288 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801277095 +0000 UTC m=+534.493941823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801318 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801345647 +0000 UTC m=+534.494010345 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801388 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801457 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801479681 +0000 UTC m=+534.494144389 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801571 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801616 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801603534 +0000 UTC m=+534.494268252 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801647 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801687 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801676936 +0000 UTC m=+534.494341734 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801729 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801725298 +0000 UTC m=+534.494389956 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801755578 +0000 UTC m=+534.494420346 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801860561 +0000 UTC m=+534.494525259 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801931 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801945 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801933534 +0000 UTC m=+534.494598222 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801961254 +0000 UTC m=+534.494626082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802387 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.802416117 +0000 UTC m=+534.495080945 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802475 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.80250155 +0000 UTC m=+534.495166278 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.904892 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905150844 +0000 UTC m=+534.597815532 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905890 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905944 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905921426 +0000 UTC m=+534.598586074 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905998 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906067 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906090431 +0000 UTC m=+534.598755109 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906144 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906135682 +0000 UTC m=+534.598800340 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906152 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906161 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906155433 +0000 UTC m=+534.598820021 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906182614 +0000 UTC m=+534.598847312 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906203 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906246 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906234035 +0000 UTC m=+534.598898723 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906478 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906507363 +0000 UTC m=+534.599171991 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906767 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906996 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906983467 +0000 UTC m=+534.599648105 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.008591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.008717 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009528 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009230 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009505138 +0000 UTC m=+534.702169846 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009285 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.00958855 +0000 UTC m=+534.702253238 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009615681 +0000 UTC m=+534.702280369 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009651 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009641841 +0000 UTC m=+534.702306479 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009846 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009983 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010084 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010146 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010155 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010174 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010179 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010192 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010203 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010184647 +0000 UTC m=+534.702849355 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010235 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010225678 +0000 UTC m=+534.702890296 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010102 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010238 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010244589 +0000 UTC m=+534.702909177 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010117 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010295 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010307 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010265479 +0000 UTC m=+534.702930147 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010370 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010415 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010439954 +0000 UTC m=+534.703104652 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010505 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010493286 +0000 UTC m=+534.703157974 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010523 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010534 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010542 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010530167 +0000 UTC m=+534.703194835 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010607 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010611 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010629 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access 
podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01065774 +0000 UTC m=+534.703322438 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010699432 +0000 UTC m=+534.703364190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010724 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010723102 +0000 UTC m=+534.703387870 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010758 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010749593 +0000 UTC m=+534.703414201 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010713 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010894 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:48.010876997 +0000 UTC m=+534.703541765 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010876 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011041971 +0000 UTC m=+534.703706649 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011212 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011195706 +0000 UTC m=+534.703860394 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011302 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01136069 +0000 UTC m=+534.704025358 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011509 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011527 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011561056 +0000 UTC m=+534.704225784 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011713 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011751922 +0000 UTC m=+534.704416630 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod 
\"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012021 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012067 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012068 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012073 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01206059 +0000 UTC m=+534.704725298 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012107332 +0000 UTC m=+534.704771970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012135 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012175 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012166653 +0000 UTC m=+534.704831381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012196654 +0000 UTC m=+534.704861342 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012241 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012257676 +0000 UTC m=+534.704922294 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012304 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012323 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012328 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012322468 +0000 UTC m=+534.704987156 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012361 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 
19:51:44.012435 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012428921 +0000 UTC m=+534.705093539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012473 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012506893 +0000 UTC m=+534.705171601 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012532 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012557 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012577695 +0000 UTC m=+534.705242303 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012627 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012614906 +0000 UTC m=+534.705279614 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012635 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012654407 +0000 UTC m=+534.705319185 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod 
\"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012942 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012933865 +0000 UTC m=+534.705598563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012982 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013007107 +0000 UTC m=+534.705671805 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013049 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013083829 +0000 UTC m=+534.705748517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013127 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013174 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013197 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013190313 +0000 UTC m=+534.705854921 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013251 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013262 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013291375 +0000 UTC m=+534.705955993 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013366 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013387 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013400 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013425259 +0000 UTC m=+534.706089977 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013467 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013529 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013555103 +0000 UTC m=+534.706219691 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013569893 +0000 UTC m=+534.706234491 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013625 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013637 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013649 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013660 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 
19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013668896 +0000 UTC m=+534.706333514 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013737 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013744408 +0000 UTC m=+534.706409076 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013764 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013898 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013993 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013928984 +0000 UTC m=+534.706593672 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014092 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014081038 +0000 UTC m=+534.706745716 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014140 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014010 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014174 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014263 4183 projected.go:294] Couldn't get configMap 
openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014280 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014305404 +0000 UTC m=+534.706970122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014140 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014366 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014356976 +0000 UTC m=+534.707021654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014478 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014497 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014614 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014633 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014557 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014725 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014621833 +0000 UTC m=+534.707286571 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014753227 +0000 UTC m=+534.707417895 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014887 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.0148704 +0000 UTC m=+534.707535088 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014924932 +0000 UTC m=+534.707589640 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015028 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015077086 +0000 UTC m=+534.707741804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015126 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015156739 +0000 UTC m=+534.707821487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015220 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015237 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015249 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015274592 +0000 UTC m=+534.707939330 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015334 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015351 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015363 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015391265 +0000 UTC m=+534.708055943 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015403 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015433346 +0000 UTC m=+534.708098064 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015447 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015469 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015458777 +0000 UTC m=+534.708123475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015480428 +0000 UTC m=+534.708145116 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015512 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015662 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015989 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016025603 +0000 UTC m=+534.708690311 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016066 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016088 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016101 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016128956 +0000 UTC m=+534.708793674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016141 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016168 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016179 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016185 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016203238 +0000 UTC m=+534.708867946 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016089 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016227249 +0000 UTC m=+534.708891987 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016240 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01625286 +0000 UTC m=+534.708917548 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016321 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016339 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016354 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016324232 +0000 UTC m=+534.708988950 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016397694 +0000 UTC m=+534.709062362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016458556 +0000 UTC m=+534.709123254 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.123741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.123987 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124249 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.124287828 +0000 UTC m=+534.816952456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124379 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12435796 +0000 UTC m=+534.817022668 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124761 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124845 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125623 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125643 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124922 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125225 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125866 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125909 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125939 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125959 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125509 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.125677157 +0000 UTC m=+534.818341855 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126022627 +0000 UTC m=+534.818687225 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126044608 +0000 UTC m=+534.818709196 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126061218 +0000 UTC m=+534.818725806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126075259 +0000 UTC m=+534.818739857 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126524 4183 configmap.go:199] Couldn't get 
configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126560763 +0000 UTC m=+534.819225471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126576 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126609974 +0000 UTC m=+534.819274682 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126655685 +0000 UTC m=+534.819320383 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126708 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126760 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: 
\"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127096 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object 
"openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127150 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127135009 +0000 UTC m=+534.819799777 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127202 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127218 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127236 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: 
\"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127259452 +0000 UTC m=+534.819924070 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127308 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 
13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127363 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127375 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127399346 +0000 UTC m=+534.820063964 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127445 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127465278 +0000 UTC m=+534.820129976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127523 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc 
kubenswrapper[4183]: E0813 19:51:44.127565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127553071 +0000 UTC m=+534.820217759 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127595 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127618283 +0000 UTC m=+534.820282971 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128162 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128184 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128194 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object 
"openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128203 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128163 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128221 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128235 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128237 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128245 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128407 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 
19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128425 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128433 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128489 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128506 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.128739 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128841 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128826497 +0000 UTC m=+534.821491235 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128865 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128858638 +0000 UTC m=+534.821523226 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128881 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128873958 +0000 UTC m=+534.821538556 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128888779 +0000 UTC m=+534.821553447 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128897 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128911 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128904169 +0000 UTC m=+534.821568757 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128914 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128924 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12892109 +0000 UTC m=+534.821585678 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128986 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12893712 +0000 UTC m=+534.821601708 (durationBeforeRetry 1m4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129043 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129070 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129169 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129202 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129206928 +0000 UTC m=+534.821871676 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129245 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129234969 +0000 UTC m=+534.821899657 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129251 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129259 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129256089 +0000 UTC m=+534.821920757 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129175 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12927832 +0000 UTC m=+534.821943018 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129311801 +0000 UTC m=+534.821976529 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129361 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129374 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129383 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs 
podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129406084 +0000 UTC m=+534.822070782 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129591 
4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129915 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129933 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129942 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129970 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130003 4183 projected.go:294] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130015 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130024 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130042 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130050 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130042282 +0000 UTC m=+534.822706890 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130066492 +0000 UTC m=+534.822731190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130086993 +0000 UTC m=+534.822751591 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130103334 +0000 UTC m=+534.822768012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130113 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130116 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130140 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130154 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod 
openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130181116 +0000 UTC m=+534.822845794 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130208 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130225077 +0000 UTC m=+534.822889695 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130125 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130256 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130275148 +0000 UTC m=+534.822939756 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130328 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130386452 +0000 UTC m=+534.823051220 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208574 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231490 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231671 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231707 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231725 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod 
openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231888 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231862193 +0000 UTC m=+534.924527101 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231918 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231941 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231970316 +0000 UTC m=+534.924635074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.232307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232506 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232537 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.232559503 +0000 UTC m=+534.925224131 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.432911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.433049 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.231951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243685 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243734 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.250376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.260567 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270440 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270462 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270491 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.274134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.288459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295272 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295448 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.298981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.311313 4183 
kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.314382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316017 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316042 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.331983 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.334511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337757 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338277 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.352708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355406 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355463 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.373092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.391013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.401894 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.409704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.427029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432224 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.444272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.484393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.502688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.523451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.541857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.559654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.573174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.592130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.610392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.627480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.648546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.669644 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.692235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.711597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.728160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.749468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.768486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.787670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.806698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.823186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.840522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.857940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.876660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.897585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.920332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.939978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.960026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.976559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.993377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.017355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.041465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.064493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.084460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.105455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.131559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.145699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.161960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.182054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.200722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.216250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.229922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.248552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.269145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.284604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.301477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.326096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.341728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.362654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.382502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.399989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.418344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.433903 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.434038 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.435722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.450695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.470748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.494002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209769 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209069 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438614 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.211104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432377 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432483 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.052715 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054254 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" exitCode=1
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"}
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.055617 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.080896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.111828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.130881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.153137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.171905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.188438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209733 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210717 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.214631 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.236337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.252160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.274387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.293618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.309933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.327415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.339067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.356718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.382858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.414933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 
19:51:49.443502 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.462735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.491399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.512191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.540731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.562040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.578684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.602039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.619953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.647290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.670030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.696913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.724296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.764759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.804118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.838000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.869325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.934757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.985716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.010727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.033229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.058686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066493 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.108483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.136625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.162404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.201241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.214679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.252552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.316722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.382307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.404173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.565462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.609289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.677552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.774159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.831854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.883166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.908097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.926107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.948769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.969102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.001579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.020651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.048407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.079964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.111499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.158627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.190165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208738 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.208755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209032 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209531 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209746 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.227696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.250311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.278588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e
9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.308426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.334742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.366728 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.389535 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.417989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.432934 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.433063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.445747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.474443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.503261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.524507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.551055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.581610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.607166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.634153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.655177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.672956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.691144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.708276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.725610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.744946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.764432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.783451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.805557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.832052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.851701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.877731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.898050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.920959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.941588 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.960968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.979684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.998927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.015163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.032298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.060751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.076447 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.080920 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" exitCode=1 Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.081153 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.083173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.084030 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.102342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.121961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.148307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.169374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.193019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.210743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.221683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.240657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.258307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.276889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.307707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.336555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.363410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.390102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.415228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.446705 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.470253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.493737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.522771 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.553383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.576758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.604001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.630113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.654000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.677756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.702363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.722691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.744739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.781291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.803867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.822987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.841762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.864875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.894209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.918285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.941567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.962727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.989228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.019200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.088903 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.093708 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"} Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209057 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209138 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209285 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209385 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.213519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.419603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433087 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433565 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.441102 4183 status_manager.go:877] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.466958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.484898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.507150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.531946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.557615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.574454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.595513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.620263 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.643765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.661541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.679196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.699891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.719095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.736644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.753246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.779415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc 
kubenswrapper[4183]: I0813 19:51:53.798894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.813907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.829676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.848644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.867138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.884452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.901600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.917929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.934615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.957559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.975018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.988551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.003492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.019915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.045142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.073909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.109722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.157049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.191993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210912 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.231007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc 
kubenswrapper[4183]: I0813 19:51:54.269858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.316335 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.352863 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.395509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431754 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:54 crc kubenswrapper[4183]: 
healthz check failed Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431947 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.469055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.513971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.554378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.591248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.628882 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670279 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670373 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670410 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670443 4183 kubelet_getters.go:187] 
"Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.677002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.708991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.749556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.788439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.827708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.869341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.909182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.051857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.073857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.091768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.103548 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.104318 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.110637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111265 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" exitCode=1 Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111326 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111388 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.113452 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.114359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.128564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.150693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.190205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208964 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209204 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209440 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209557 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210526 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212369 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.213025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.229070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.269118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.309346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.349660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.389738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.405084 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.428221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431603 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.470619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.509111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.549149 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.603315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.648403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672538 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.676602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.689090 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694458 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694476 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694525 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.710534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.711687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc 
kubenswrapper[4183]: I0813 19:51:55.715384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715407 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.729740 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734267 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.748461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.749506 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754376 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754428 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770551 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770612 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.793354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.830858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.870129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.911955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.949662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.990308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.028434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.070402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.116354 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.118370 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.151098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.189539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208767 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.227890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.269856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.312263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.352152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.392237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.430765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.432892 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 
19:51:56.432974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.470358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.510332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.548723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.589165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.636142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc 
kubenswrapper[4183]: I0813 19:51:56.675128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.710190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.750476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.787998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.833890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.870929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.910554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.950076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.989745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.039128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.067853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.108434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.148481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.191345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210217 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211569 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212402 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212891 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213076 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214618 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.234166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.269748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.309703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.351314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.393367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.430134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433363 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:57 crc kubenswrapper[4183]: 
[-]has-synced failed: reason withheld Aug 13 19:51:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433466 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.473916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.512328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.552934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.591686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.631762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.671139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.715296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.748927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.791380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.828504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9
c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.867666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.910258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.952209 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.990313 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.029597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.071537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.110306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.151115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.189829 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.208723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209363 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.232644 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.233686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.269432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.309528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.348194 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.391381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.432032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435206 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435332 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.470307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.510678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.546335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.589101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.629663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.672130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.711319 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.750960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.795613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.828649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.875345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.915763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.952986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.991605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.028182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.068754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.108430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.151392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.190051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208856 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.208978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211128 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211693 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.234294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc 
kubenswrapper[4183]: I0813 19:51:59.274943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.311017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.348750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.389389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.432355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434100 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.470411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.514230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.548702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.592005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.637684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.671538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.709341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.747923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.792326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.830866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.869021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.911656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.949686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.989656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.029718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.113996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.161651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.193280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209078 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.232660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.407592 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437431 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437677 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.169739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.186957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.203493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209441 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210111 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210647 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210843 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211113 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211459 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212191 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212682 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213297 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.222402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.238596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.254125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:01 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:01 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433573 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.210612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210836 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433352 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.433926 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.434116 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.210132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433388 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433530 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209077 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209348 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210886 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211413 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.241622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.269937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.291147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.310586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.326667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.346252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.363210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.381545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.400245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.411345 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.417429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433748 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433937 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.434654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.455425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.472571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.494716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.511286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.536543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.556634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.573265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.588031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.605925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.622428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.639133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.654938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.676539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.701538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.719491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.738991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.757872 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.773712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.790734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.812314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.828025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.841756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.857351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.883484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc 
kubenswrapper[4183]: I0813 19:52:05.935102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945384 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.959048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.977053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.983836 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984379 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984545 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.987475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.009425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015291 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.020379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.028933 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033686 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033858 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.038611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.049417 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.052929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054542 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054617 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.068530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070432 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070487 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.085959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.099899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.119378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.148905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.169759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.185748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.200450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.216865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont
/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.237627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.253441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.269458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.289357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.305318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.319150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.343453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.362951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.382658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.401025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.416378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.431650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433682 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.449299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.464728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.481490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.496761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.511219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209748 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210576 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211843 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215266 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216382 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432160 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432324 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432478 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.214453 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.235903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.253683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.268602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.283004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.305636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.320032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.348304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.365573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.386142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.404931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.425928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.433930 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.434051 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.450073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.466041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.484876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.509271 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.533459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.551080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.569356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.585374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.610325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.635148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.655616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.677546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.693348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.717671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.733954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.759086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.792389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 
handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.812763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.837635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.855295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.870753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.892653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.909739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.925691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.941728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.955310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.193066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.198695 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"} Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.199424 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.384450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.413605 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.415973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434354 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434495 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.435322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 
19:52:10.463767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba89
4f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.490287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.513656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.531393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.559318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.576538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.595912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.612337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.630337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.650673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.671237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.691461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.711148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.729313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.745359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.762311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.780161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.800473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.815505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.838247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.855675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.873421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.890107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.910909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.930653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.947686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.964867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.980401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.997023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.012398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.038504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.058439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.074053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.092033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.110944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.130460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.146314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.164420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.181987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.199167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.206704 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.207918 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208710 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208834 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209392 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.212658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220688 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" exitCode=1
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220724 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"}
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220755 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.222944 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"
Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.223746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.224423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.239353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.260542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.285055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.301059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.317102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.333940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.349865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.367731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.383535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.399153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.413553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.427553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432442 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432594 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.444611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.459377 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.476690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.490626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.505916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.529938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.547010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.564254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.579887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.594465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.608076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.621930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.634400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.645167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.659323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.672023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.687119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.702248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.719733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.742523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.785341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.826745 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.865540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.904915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.944458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.987002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.023663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.062628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.104674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.162176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.208750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.220029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.225077 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.231205 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"
Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.231753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.243493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.263920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.303742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.343108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.383571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.425073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.432973 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.433311 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.464596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.506033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.550945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.625323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.664291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.702715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.742888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.784249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.826128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.866561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.907159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.943165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.984025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.034256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.068620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.110215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.144326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.186159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.208718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208737 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209122 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209353 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211727 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.228249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.267215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.318565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.352450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.385065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434632 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.467043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.506485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.548500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.586552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.624309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.666883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.706643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.748034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.788673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.825458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.864231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.912549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.942434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.987846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.025168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.066442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.106490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.147068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.194562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.224109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.271373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.311935 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.345095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.390624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.424116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432615 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.464420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.503521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.545507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.592057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.626276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.663963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.703505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.744672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.783516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.824472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.864064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.905381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.948008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.992917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.038513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.064933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.104579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.145413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.198465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209473 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209628 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210563 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212086 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213841 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214312 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213973 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.216482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.227402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' 
']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.268129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.309977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.349363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.390655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.415404 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.429276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431279 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431351 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.466036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.507253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.544295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.590323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.624546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.665430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.703927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.746173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.790655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.824177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.865302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.905082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.946521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.992266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.024555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.067531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.107490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.146206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.190060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.231890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 
19:52:16.273999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.305629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.346866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.386216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414229 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.422942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.429501 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.433761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.434145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437038 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437136 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.454608 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.459745 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460019 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460041 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460107 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.466764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.477042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482659 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483021 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483254 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.497658 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502392 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.510712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517856 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517912 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.545277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.584994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.624376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.665034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.704513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.744732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.787206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.824978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.865716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.906553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.944654 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.984642 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.026637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.071659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.110878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.154617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.193322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209441 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213487 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214004 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215561 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216433 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217273 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.231440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.269561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.310566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.350303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.386924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.428004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432423 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.466660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.505851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.548122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.586931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.631724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.668022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.707047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.747098 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.788624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.826323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.873994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.908004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.950001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209368 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209430 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.211124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.209741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209892 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210340 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211199 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213210 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433021 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433099 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.416675 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:20 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:20 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432692 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431557 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209307 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.433195 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209546 4183 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210031 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210391 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210631 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210892 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211412 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432433 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432563 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208554 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210688 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.214252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.216566 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.217610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.226358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.242549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.299749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.341904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.359156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.375407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.390704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.409386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.417898 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.426634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431688 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.444429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.461537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.478427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.503501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.518568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.532860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.546481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.564679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.581764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.595706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.611643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.627545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.640945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.653988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.669505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.684758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.703379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.720294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.738580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.756591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.771551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.785974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.802235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.820982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.844721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.861896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.879449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.901580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.919540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.935967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.952310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.970757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.987613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.005661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.040874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.056947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.071417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.087879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.100908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.117528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.137686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.152342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.169756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.184095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.200458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208477 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.218174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.238245 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: 
I0813 19:52:26.258348 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.278322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.295976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.312292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.329156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.349715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.369468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.389175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.458509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.462533 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 
19:52:26.462605 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.475948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.489750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.678978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679127 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679154 4183 setters.go:574] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.695941 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701503 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701549 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701580 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.714964 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.721022 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.742042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748221 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748300 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748354 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748382 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.765415 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773111 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798253 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798717 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.213468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.216468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432681 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432746 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210945 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211519 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213515 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.214194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.217649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.219115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432152 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432228 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.419375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432321 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212834 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.212849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432358 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432514 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209368 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.210957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.211220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432208 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210247 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212788 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214498 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214917 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215497 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.217256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.432982 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.433080 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208487 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.208581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209072 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209684 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210211 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214708 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.230689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5
bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.253852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.272341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.294992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.316486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.345283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.367209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.386589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.406740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.422091 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.434961 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.435098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.455720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.484301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.504250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.526163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.545954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.561206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.582883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.601440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.619163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.636193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.655635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.673654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.697355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.721909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.742057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.764238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.786316 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.808679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.834060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.851181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.869679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.889315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.912331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.939384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.962990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.011493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.031604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.049744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.068270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.088919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.105469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.119708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.163378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.178411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.193207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.208928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.210040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.211242 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.237267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.285341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 
builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.319504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.339747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.361300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.383338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.402186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.420719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432356 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: 
[-]has-synced failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.440084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.464341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.497535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.523495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.545963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.562734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.584371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.612921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.638333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.658049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.674476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.693032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.709945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176536 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.196779 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205616 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205644 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205894 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208777 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209043 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.230112 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237097 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.256130 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266363 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.292768 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303961 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324874 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324934 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.337735 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.342713 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.343674 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.363228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.386204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.408880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433285 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.444023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.468489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.487294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.513565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.539382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.563409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.587610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.604960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.624506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.646496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.665473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.683084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.701585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.718976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.737227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.754727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.775330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.794554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.810987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.831508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.846189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.862428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.879920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.895570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.911611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.926516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.942401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.960427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.979386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.998986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.016299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.033333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.050437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.070426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.090225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.110010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.127127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.149472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.162708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.180225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.200313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208538 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.223468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.242190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.265950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.286164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.303585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.321033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.336490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.348697 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349385 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349487 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349571 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349612 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.350361 4183 scope.go:117] 
"RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.350946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.360041 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.363171 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.369945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370756 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.372999 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:38 crc 
kubenswrapper[4183]: E0813 19:52:38.375539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.396916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.416534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.434054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.435514 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 
19:52:38.435584 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.444347 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.457290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.474354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.489300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.504978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.520207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.534614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.552553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.567979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.582668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.599143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.619122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.635775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.652259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.667767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.706336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.742013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.781196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.821572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.863521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.914224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.939955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.980985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.021846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.064911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.103720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.142686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.182336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210188 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212064 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.225661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.260868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.300564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.343139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.376348 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.382404 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.383272 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.383985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.387456 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service 
ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.422137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432161 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.462661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.502527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.542061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.583749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.623154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.664280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.702579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.742490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.784263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.819964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.862032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.902937 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.944042 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.984281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.023729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.062937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.101556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.141470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.182193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211862 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.212009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.222570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.271469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.303912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.340687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.382498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.420310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.423025 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432362 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432457 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.467180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.499089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.541457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.580393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.622895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.663042 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.701558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.741302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.782293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.824763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.863089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.903084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.944534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.982268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.027964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.061227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.101132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.143663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.219652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.241419 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.260555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.300637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.343492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.382770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.421583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.433422 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 
19:52:41.433589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.466744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.501232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.540895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.582045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.622079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.669053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.701597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.741217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.785021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.829952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.862883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.900544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.940912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.982482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.032242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.075143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.107989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.145901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.183904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.208943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.223855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.265003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.304026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.342508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.384622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.421764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433136 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.467297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.500714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-0
8-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.543349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.581567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.620907 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.660877 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.702673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.741913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.786140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.821018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.862122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.901428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.940972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.980905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.024003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.062070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.104511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.142213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.178631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.210006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208960 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209078 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209149 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209299 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.228269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.234944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.262286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.307238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.340870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.386695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.423363 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432499 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432576 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433737 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"} 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433910 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" gracePeriod=3600 Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.471203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.504175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.542438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.583471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.627254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.663307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.702337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.741944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.784352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.821666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.862193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.901605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.947467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209141 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208263 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208573 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209086 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209245 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209572 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.228109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.243759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.257682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.274879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.299505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.315998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.333233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.349437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.369088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.386205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.402717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.418604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.424749 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.435063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.450501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.466728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.484494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.506103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.526745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.544121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.560293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.576479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.592342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.606424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.623898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.640028 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.656033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.672996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.689672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.707571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.727997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.744728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.764464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.781356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.798692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.813233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.829522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.847609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.871681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.891981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.909756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.926926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.940339 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.960178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.979178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.997160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.014919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.030926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.056042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.070508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.085050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.100600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.120101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.136747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.151555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.167132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.184219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.221974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.262428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.303976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.341939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.384296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.426470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.463005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.506418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.541329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.578547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.621934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214832 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.513100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.529050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535958 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.553158 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558619 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558683 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579446 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579588 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579612 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601542 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601662 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601683 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601734 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.617075 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.617146 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833413 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833517 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 
19:52:47.833949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834467 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object 
"openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834564 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.834546662 +0000 UTC m=+656.527211290 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834595 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835047 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835096868 +0000 UTC m=+656.527761486 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835161 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.83517882 +0000 UTC m=+656.527843438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835238 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835260362 +0000 UTC m=+656.527924980 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835316 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835346 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835334735 +0000 UTC m=+656.527999353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835367 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835396 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:49.835407137 +0000 UTC m=+656.528071765 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835425587 +0000 UTC m=+656.528090205 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835480 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835498 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:49.835498629 +0000 UTC m=+656.528163327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835519 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835559 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835556891 +0000 UTC m=+656.528221509 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835581162 +0000 UTC m=+656.528245780 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835613 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835635 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835634823 +0000 UTC m=+656.528299491 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835680264 +0000 UTC m=+656.528344862 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835697 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835716745 +0000 UTC m=+656.528381363 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835879 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835906401 +0000 UTC m=+656.528571019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835977 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835997893 +0000 UTC m=+656.528662511 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836177 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836192 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.83621969 +0000 UTC m=+656.528884308 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839998 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840108 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840128 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84010544 +0000 UTC m=+656.532770178 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840148222 +0000 UTC m=+656.532813000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840202 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840226214 +0000 UTC m=+656.532890892 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.840036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840292 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840347067 +0000 UTC m=+656.533011775 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.841454 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84537118 +0000 UTC m=+656.538035978 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.841251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845907 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845982 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846023979 +0000 UTC m=+656.538688597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846112 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846128 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846153432 +0000 UTC m=+656.538818150 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846189 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846247 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846228075 +0000 UTC m=+656.538892803 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846388059 +0000 UTC m=+656.539052777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948020 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948087 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948247 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948362 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948341411 +0000 UTC m=+656.641006159 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948432933 +0000 UTC m=+656.641097531 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948271 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948629239 +0000 UTC m=+656.641293857 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948682 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948706071 +0000 UTC m=+656.641370689 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948748 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948765113 +0000 UTC m=+656.641429721 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948917 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948955 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948946708 +0000 UTC m=+656.641611446 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948976 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949001269 +0000 UTC m=+656.641665887 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949714 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949914 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949960 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949947996 +0000 UTC m=+656.642612724 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950001 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.950030879 +0000 UTC m=+656.642695497 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051168 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051355 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051441145 +0000 UTC m=+656.744105753 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051471 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051516157 +0000 UTC m=+656.744180885 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051695 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051722343 +0000 UTC m=+656.744387081 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051760 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052166 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052213467 +0000 UTC m=+656.744878215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052226 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052252658 +0000 UTC m=+656.744917246 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052297 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052282939 +0000 UTC m=+656.744947677 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052167 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052326 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052328 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052347 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052353 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052359 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052387 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052378722 +0000 UTC m=+656.745043330 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052407 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052400002 +0000 UTC m=+656.745064590 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052427 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052462 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052455724 +0000 UTC m=+656.745120432 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052518746 +0000 UTC m=+656.745183424 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052544236 +0000 UTC m=+656.745208884 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052310 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052594 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052606 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 
19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052642 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052631639 +0000 UTC m=+656.745296357 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052246 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052672 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052684 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052721 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052709461 +0000 UTC m=+656.745374169 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053009 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053020 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05304065 +0000 UTC m=+656.745705268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053119443 +0000 UTC m=+656.745784151 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053132 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053155 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053168 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053169 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053190 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc 
kubenswrapper[4183]: E0813 19:52:48.053202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053192045 +0000 UTC m=+656.745856853 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053233 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053222586 +0000 UTC m=+656.745887334 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: 
\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053470 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053487 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053508314 +0000 UTC m=+656.746173112 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053552 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053564 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 
19:52:48.053605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053624 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053641708 +0000 UTC m=+656.746306316 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053675 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053690 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053710459 +0000 UTC m=+656.746375268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05372581 +0000 UTC m=+656.746390518 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053858 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053886 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053899 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053901125 +0000 UTC m=+656.746566023 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053953 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053971 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053961097 +0000 UTC m=+656.746625915 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.053991417 +0000 UTC m=+656.746656106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054011 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054020818 +0000 UTC m=+656.746685486 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053863 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054064 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054053 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054041739 +0000 UTC m=+656.746706537 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054195 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object 
"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054205694 +0000 UTC m=+656.746870362 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054254 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054267 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054275 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: 
\"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054298946 +0000 UTC m=+656.746963694 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054317477 +0000 UTC m=+656.746982195 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054384 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054403 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054459 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05444543 +0000 UTC m=+656.747110118 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054492 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054505 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054513 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.054537283 +0000 UTC m=+656.747202021 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054652 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054684927 +0000 UTC m=+656.747349545 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054750 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054858 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054870 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054882423 +0000 UTC m=+656.747547121 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054926 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054941 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054950025 +0000 UTC m=+656.747614643 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054991 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055017657 +0000 UTC m=+656.747682255 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055032 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055052 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055074498 +0000 UTC m=+656.747739086 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055105 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055127 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05512119 +0000 UTC m=+656.747785808 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055155 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055163 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055152041 +0000 UTC m=+656.747816729 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055195 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not 
registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055241 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055249283 +0000 UTC m=+656.747913961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055280 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055271414 +0000 UTC m=+656.747936072 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055352 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055374 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055393 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 
19:52:48.055399 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055404 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055428488 +0000 UTC m=+656.748093176 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055471 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055479 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055450469 +0000 UTC m=+656.748115117 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055505261 +0000 UTC m=+656.748169849 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055523 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055542 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055553 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] 
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055519611 +0000 UTC m=+656.748184199 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055592043 +0000 UTC m=+656.748256721 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055749 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055975 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056102 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056282 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056298 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056306 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057081 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057106 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057091736 +0000 UTC m=+656.749756354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057129047 +0000 UTC m=+656.749793645 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057144567 +0000 UTC m=+656.749809165 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057169 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057188 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057215689 +0000 UTC m=+656.749880387 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057249 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057269571 +0000 UTC m=+656.749934289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057314 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057341133 +0000 UTC m=+656.750005831 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057377 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057383 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057394344 +0000 UTC m=+656.750058952 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057450 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057465906 +0000 UTC m=+656.750130514 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057505 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057521288 +0000 UTC m=+656.750185896 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057571 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057601 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05759978 +0000 UTC m=+656.750264498 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057632 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057623531 +0000 UTC m=+656.750288209 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057648 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057663382 +0000 UTC m=+656.750327990 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057695 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057702 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057712 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057428 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057718324 +0000 UTC m=+656.750382952 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057771795 +0000 UTC m=+656.750436433 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057900 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057890248 +0000 UTC m=+656.750554906 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058168 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058188837 +0000 UTC m=+656.750853455 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058238 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058255079 +0000 UTC m=+656.750919697 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058295 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058306 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05831096 +0000 UTC m=+656.750975558 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058354 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058344461 +0000 UTC m=+656.751009089 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058357 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058377552 +0000 UTC m=+656.751042170 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159673 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159733 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159861 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159879 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159933 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.159953133 +0000 UTC m=+656.852617871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160056 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160073 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160085 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160122398 +0000 UTC m=+656.852787016 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160216 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160384 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160428 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160438 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160461 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160476 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160484 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160497 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160404996 +0000 UTC m=+656.853069594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16053091 +0000 UTC m=+656.853195498 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160552 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160554 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16054813 +0000 UTC m=+656.853212718 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160588171 +0000 UTC m=+656.853252759 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160613 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160606452 +0000 UTC m=+656.853271120 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160825658 +0000 UTC m=+656.853490346 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160919 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160966 4183 secret.go:194] Couldn't get secret
openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160994 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161006 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160996343 +0000 UTC m=+656.853661061 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161038 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161059 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161073 4183 
projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161119 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161131 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161139 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object 
"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161161908 +0000 UTC m=+656.853826536 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161208 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.16122758 +0000 UTC m=+656.853892268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161341 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161372 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161385 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161445 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" 
failed. No retries permitted until 2025-08-13 19:54:50.161422015 +0000 UTC m=+656.854086773 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161469 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161478 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161495647 +0000 UTC m=+656.854160265 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161548 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161628 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161973 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162053 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162063 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162097 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162087214 +0000 UTC m=+656.854751842 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162133 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162146 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162189 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162180547 +0000 UTC m=+656.854845165 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162271 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162364 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client 
podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162394913 +0000 UTC m=+656.855059631 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162451 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: 
\"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162705 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162712 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162761 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162752873 +0000 UTC m=+656.855417581 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162856 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162894 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162910 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162918 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object 
"openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162938358 +0000 UTC m=+656.855603076 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162984 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.1630009 +0000 UTC m=+656.855665518 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163028701 +0000 UTC m=+656.855693399 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163047 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163075 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163081 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163082962 +0000 UTC m=+656.855747750 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163105873 +0000 UTC m=+656.855770531 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163130 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163126314 +0000 UTC m=+656.855790902 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161082 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163150484 +0000 UTC m=+656.855815102 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163171245 +0000 UTC m=+656.855835933 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163194 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163212 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163215276 +0000 UTC m=+656.855880004 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163230867 +0000 UTC m=+656.855895575 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163251 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163269 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163269368 +0000 UTC m=+656.855934066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163285908 +0000 UTC m=+656.855950616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163309 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163322 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163327219 +0000 UTC m=+656.855991827 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163053 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16334246 +0000 UTC m=+656.856007168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163355 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163367 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163385961 +0000 UTC m=+656.856050569 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163416 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163442 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163451 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163442143 +0000 UTC m=+656.856106761 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163483484 +0000 UTC m=+656.856148162 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163526 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163545526 +0000 UTC m=+656.856210144 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163594 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163604 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163631288 +0000 UTC m=+656.856295896 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163934 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163922136 +0000 UTC m=+656.856586864 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.265051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265289 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265336 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265350 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.265423635 +0000 UTC m=+656.958088343 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267261 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267323 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267583 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267694 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.267669979 +0000 UTC m=+656.960334777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267438 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267728 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267769 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.267758041 +0000 UTC m=+656.960422719 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210332 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211929 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213100 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209305 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.211605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.427233 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209283 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.213062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.209921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213308 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213744 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214541 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.215661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.221724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.230050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.230120 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.272729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.291619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.309554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.353377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.374133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.391290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.412137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.432056 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.450312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.469314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.493533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.528063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.548427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.580703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.597874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.624158 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.644935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.660446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.678441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.704178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.723474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.747950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.766311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.784418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.802502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.827606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.887004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.909603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.932961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.956100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.984708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.007942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.036349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.060298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.084656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.104630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.124106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.147000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.180891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.204590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.209343 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211362 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.249073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.272723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.325278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.358963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.384407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.438616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451481 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.465028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.487264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.512259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.534289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.557134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.580708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.602307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.621275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.643473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.660925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671428 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671592 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671620 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671648 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.680214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.697441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd6
4fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.717333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.739209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.755713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.776009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.796024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.814228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.833608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.858330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.872556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.901065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.917405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.939248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.962034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.986637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.004638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.056963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.073019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.100671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.121120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.136025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.154190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209085 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.208968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.223465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.245891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.276459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.294213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.311184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.325979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.342197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.360517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.378013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.397463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.414562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.428270 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.430239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.446119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580
a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.464652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.481684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.497160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.515621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.530951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.546912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.560681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.576488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.590894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.606186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.622268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.641249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.655300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.670146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.687590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.700914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.719965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.733304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.750473 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.767552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.783418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.806909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.847632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.891223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.926932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.967972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.009327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.052226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.088056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.128925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.168055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.217453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.250113 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.287005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.340495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.366313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.409898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.448664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.488369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.527413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.568209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.606303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.645419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.686023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.727454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.768714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.809265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.851656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.890551 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.927907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.968662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.007645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.048984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.085914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.126509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.167512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212177 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212556 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213547 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214920 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.221431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.248209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.288145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.325022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.369307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.407085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.445620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.487062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.526150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.565870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.622015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.649319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.694467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.726227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.767050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.816111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825224 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.844592 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.849662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851331 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 
19:52:57.851450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851506 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.867424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.872876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873133 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.890447 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.892430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895408 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895429 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.911046 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917496 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917525 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.930628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using 
insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934081 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934152 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.974238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.064854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.080330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.125372 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.146370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.167079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.248416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.292560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.345142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.369067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.412622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.451603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.489734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.529354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.567941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.616097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.650751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.691277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.728302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.769409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.808077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.848614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.888267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.929602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.972584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.012287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.048247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.087204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.127933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.167383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.206258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208525 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211867 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.213463 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.251501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.288162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209238 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.430758 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210463 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212175 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212620 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213479 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214745 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217097 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209319 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.220023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210395 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210619 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211110 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211556 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213021 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213345 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.232749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.249553 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.265949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.288121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.303250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.325324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.346055 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.364583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.385025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.400961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.420532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.432664 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.438473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.456304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.474210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.502157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.523232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.542231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.559857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.577721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.595336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.610971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.628535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.644528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.660739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.687394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.706478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.726959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.745459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.761669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.793258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.810717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.826241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.844393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.865729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.880658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.905939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.924566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.943441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.958690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.976536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.991988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.009247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.029199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.051684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.074064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.091026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.108384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.127712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.161603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.176522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.193566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208852 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.208969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.226697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103
f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.247553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.262937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.280143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.297460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.314589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.329411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.344491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.363875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.384139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.399159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.416480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.431613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.446542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.465060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210208 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211376 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.212433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.214999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.215462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.217700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.219512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.220171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.223037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112494 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.127077 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133096 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133115 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.149139 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154671 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209434 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.212317 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219237 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210263 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210921 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212124 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.212548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213255 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214527 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.216642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216869 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218778 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.222762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.224671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144518 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144678 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.434340 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213435 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.208990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.209590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209980 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210127 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210464 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211113 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211644 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212214 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209157 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208340 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211526 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.232917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.257095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.275741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.294017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.311263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.326082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.350167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.367321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.386701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.406995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.424198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.436517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.443739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.475644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.490565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.504499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.521441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.546015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.564356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.581878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.601415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.619904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.636727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.651462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.670842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.687560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.705231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.722704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.746117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.764567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.779374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.793760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.812122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.829014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.844001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.858650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.874405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.892251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.911169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.927621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.944868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.962649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.979042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.996574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.012300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.026681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.043512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.058980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.075251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.094188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.110110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.131981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.149296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.164255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.182059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.196450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209434 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.217025 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.233035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.249264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.266023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.282951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.303166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.318633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.338112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.356717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.376128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.405346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.420631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208956 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209715 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.622493 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.651260 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.660001 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.683285 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692976 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.709458 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716134 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716282 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716598 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.731537 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737532 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.738116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752496 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752555 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209595 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211541 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.214505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.438349 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.214981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.211475 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.567394 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"} Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.593181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.607752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.622102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.648006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.698183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.727766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.752717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.781315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.806877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.830051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.847684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.865368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.882685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.905244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.923713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.936009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.952129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.967511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.984148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.003141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.024410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.041828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.065370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.083555 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.103218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.125183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.144210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.163094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.180890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.199082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.208670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208977 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209480 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.222706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.250925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.268285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.288220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.310289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.328407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.347659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.365228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.382364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.404866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.420067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.449433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.477099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.502895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.520673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.539114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.565991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.588006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.608159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.635103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.655737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.675089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.694536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.718288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.736921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.756496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.772937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.789085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.816278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.835668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.851892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.867883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.888283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.905277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.923323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.944177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.964361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209240 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.583497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.584634 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589535 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" exitCode=1
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589610 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"}
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589659 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.591641 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.592274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.611151 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.630162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.650660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.670662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.690138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.711723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.728917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.752108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.772436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.791573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.810438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.825256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.842180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.865908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.889759 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.905930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.925144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.941462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.968054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.993271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.013392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.031716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.051455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.069413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.095557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.113754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.130412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.148394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.171521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.190070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.206906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.236087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.253641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.268482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.285117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.304254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.322188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.342421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.364543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.384643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.406592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.422127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.435988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.440076 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.457238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.475270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.495190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.510984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.526122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.541479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.557306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.571163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.588255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.595319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.613377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.627767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.644326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.661423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.683956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.703188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.718349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.742963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.762330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.778538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.799734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.829227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.868714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.912340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.948941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.988226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.033058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.070414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.109327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.149852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.193585 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208467 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.208930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.232056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.311014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.353926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.389309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.426273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.468221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.514729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.552699 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.592610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.631348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.668980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.707527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.747072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.787953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.826284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.867460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.936428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.990071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.007532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.025362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.066572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.110043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.153537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.188630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209743 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210190 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211758 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.215028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.227563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.269975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.307988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.346944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.389491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.430501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.469200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.509052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.549620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.586342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.629426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.667841 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.708647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.753720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.788652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.827499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.866369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.909362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.947887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.991220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.028611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.069532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.108221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.150567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.186164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.230106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.269551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.307370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.348529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.390113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.429726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.469704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.509945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.545067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.589112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.625949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078358 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.092331 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.111095 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115885 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.116049 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.146308 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151029 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165069 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165121 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210104 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623087 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" exitCode=0
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"}
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.441591 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.630111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"}
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.650906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.675255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.693444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.709315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.732295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.780334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.799883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.817633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.832910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.850378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.867869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.885717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.903042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.920897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.940193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.957944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.975079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.993986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.009763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.024521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.045401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.062640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.082641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.102222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.127310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.144765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.163260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.183273 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.205641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211451 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212713 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.229367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.246833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.265301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.288169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.302377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.318284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.340349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.364733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.385035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.401403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.416662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.430462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.431706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.437912 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.438012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.446452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.464122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.479497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.496936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.517951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.535666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.550668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.567720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.581755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.606953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.624174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.643075 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.670693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.689319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.704929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.718165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.733629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.758219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.778196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.797734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.820394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.838613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.860536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.878112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.894491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.911205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210010 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208980 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209183 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209431 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210529 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.213046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432657 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432582 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox
for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208539 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208686 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209223 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209600 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211084 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211575 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.228976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.246680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.264431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.281380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.305188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.324247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.339954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.356597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.370849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.393555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.410696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.427482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.432627 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.433086 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.443071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.443253 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.459449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.476096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.500226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.515552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.529081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.551895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.569436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.586751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.603408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.620765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.637101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.657719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.672039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.690353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.707181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.724412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.746962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.769400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.790167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.805204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.821278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.841564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.862171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.877699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.913498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.931602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.950005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.970697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.990188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.009600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.054363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.082422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.117853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.136869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.156267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.175321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.195651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.216507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.239172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.257266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.278303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.293213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.311450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.330961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.351692 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.373683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.392943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.409003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.429216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433347 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.467128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.483283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.501753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.521272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.208877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433091 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.208859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432689 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208561 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210991 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211038 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210996 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211214 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212947 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.231024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.245045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.261434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.276633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.301151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.317028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.333623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.348741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.367248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.384065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.401651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.417090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427537 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427555 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427580 4183 
setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432836 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432948 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.437111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.443272 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448082 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448214 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.456699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.463185 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469149 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469440 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.475478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.485313 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.490333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492504 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: 
I0813 19:53:39.493911 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.508022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.510746 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516571 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.527094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538601 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.542631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.558296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.571119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.594431 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.611651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.626397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.648604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.664625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.681279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.695379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.712153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.733239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.746960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.760668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.790297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.815297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.842519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.867012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.882724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.898521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.915112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.934385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.952227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.968554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.983660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.999704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.013880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.037096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.052019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.067281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.081054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.099048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.116529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.134155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144190 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144297 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 
19:53:40.150093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.169843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.184050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.197481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.208957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.214938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.236241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.257580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.281956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.300995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.318871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.336729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.351079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.373371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.389739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.406613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.430230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.432939 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.433033 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.444450 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.666554 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667349 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667412 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" exitCode=1 Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667440 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667474 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667995 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.668458 4183 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.817399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d5
37796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.833153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.854704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.870102 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.895697 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.911390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.927015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.942995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.958510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.972748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.987283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.002603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.017928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.031162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.048549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.065241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.080681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.098948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.112276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.129425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.146903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.166703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.187548 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.205905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209380 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211532 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212971 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.233630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.253210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.270092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status:
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.291469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.308630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.327316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.343225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.361917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.380121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.397201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.418045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433545 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.440924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.457607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.475497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.491204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.510182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.545711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.583944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.623729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.661893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.674930 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.705079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.753198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.783123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.822335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.861983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.907293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.944270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.982765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.023703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.064041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.105425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.140655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.182237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.226494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to 
decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.395062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.417003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433565 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.441649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.483633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.523251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.546872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.564833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.581658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.599446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209952 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211605 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:43 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.208997
4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433166 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208111 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210399 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210685 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211645 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214564 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.216119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.228506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.244413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.266523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.282344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.301419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.317484 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.343094 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.366623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.390910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.412466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.428976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432971 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.444519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07b
d759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.445495 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.462054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.483249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.536918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.553242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.567225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.583327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.600851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.617265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.632915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.649493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.665401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.680595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.703715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.722916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.739844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.758872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.778887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.799107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.818013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.839415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.857183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.874746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.892990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.913007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.938071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.957247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.976381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.994336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.010962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.026618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.041672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.057057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.071375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.087459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.110382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.139623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.163200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.180862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.200285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.209320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.223042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.244713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.263180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.281184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.303077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.319318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.335462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.352247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.371636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.393350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.412632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:46 crc kubenswrapper[4183]: healthz check failed Aug 
13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.433664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa
4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.451459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.482964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.504596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.528661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208555 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.208708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210155 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210898 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211253 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.215486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432346 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.432761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.433012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211830 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212327 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213090 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432345 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432468 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597543 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597563 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.619535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.624933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625101 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.639740 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645557 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645658 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645694 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.662090 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667611 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667687 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667756 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.680742 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684915 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684964 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684999 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.685021 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.698982 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.699034 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209680 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433396 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.447537 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.213002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433111 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.210090 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.246218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.273189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.297541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.315907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.336407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.356375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.385124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.417574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435138 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435236 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.442150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.460014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.475110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.498992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.521125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.544454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.561002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.580187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.606467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.622969 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.638573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.663627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.682266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.698573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.713371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.731418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.748446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.764586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.780340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.798676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.821154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.840497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.858177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.876713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.897604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.919472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.938545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.958184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.974657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.991260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.007552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.024227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.041917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.060233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.078692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.095654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.114681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.132080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.147188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.164064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.181695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.196662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208503 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209580 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210437 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210606 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212507 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.214002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.219682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.234186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.249949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.276276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.293737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.307024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.324685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.340397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.360625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.383583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.403511 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.422232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.433879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.434484 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.441560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.458694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.476080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.496085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.514714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.672919 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673057 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673077 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673144 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208215 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.208704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208359 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208533 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210679 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.238121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.255378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.272524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.289503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.305168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.324147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.341375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.360200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.375966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.393325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.435869 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.436416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.437696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.449085 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.462850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.494154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.513435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.529023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.546387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.562010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.577598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.592148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.605024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.621065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.635968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.654699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.673109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.694539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.709601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.727622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.745645 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.760696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.778891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.797894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.820044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.837558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.851065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.867307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.893494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.911947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.928402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.945108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.964266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.981029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.998336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.015261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.032421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.049283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.072432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.088004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.103649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.121709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.136165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.161507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.181985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.197316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.217469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0
ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 
19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.234617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.249469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.264258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.285044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.301282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.317289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.332663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.349009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.374611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.395767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.416526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433386 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.434475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.451153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.431877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.432015 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.433730 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208553 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208950 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209430 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210682 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210985 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433137 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:59 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:59 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433251 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978318 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:59Z","lastTransitionTime":"2025-08-13T19:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.997328 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002438 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002589 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.017464 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022300 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.037334 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042236 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.058106 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063673 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.078984 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.079331 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.208641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208921 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209657 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432153 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.451444 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209396 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209677 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210055 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.213037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.433735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:01 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:01 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.434016 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.208733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.433762 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.434000 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209763 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434344 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775722 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775967 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"} Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 
19:54:03.803302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.821177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.842350 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.860978 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.879569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.899942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.918966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.936621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.953349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.969879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.989463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.013992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.030651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.047588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.063650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.078645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.092414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.109317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.123153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.142874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.158244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.177134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.196000 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.209313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.217651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.234592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.252427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.268109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.285023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.300514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.318412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.334358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.351593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.368405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.386606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.401922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431854 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.443429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.460615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.475599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.489891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.511979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.532745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.563896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.611929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.630848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.649600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.671898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.689530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.704682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.727601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.747214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.764618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.794921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.811659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.829037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.845755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.864724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.887374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.912999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.938070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.964206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.987029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.008924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.048697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.071536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210736 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211409 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211875 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212697 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.216133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.218362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.219550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.234314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.256088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.283576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.299983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.316689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.341612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.363210 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.380325 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.397248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.413567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.432658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434383 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.453270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.453517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.470353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.487686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.503990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.519461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.533579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.548224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.565100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.581076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.592064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.607745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.622422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.640858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.655570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.678692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.720408 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.744215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.775118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.797041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.814911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.831856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.847576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.863379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.879451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.912946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.950900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.991453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.032447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.079265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.109344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.149366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.201227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.231338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.276045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.314026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.358928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.389035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.430025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432278 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432376 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.480965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.512121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.556944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.593513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.633299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.669519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.707970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.750714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.792924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.830983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.871963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.908731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.950109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.993932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.031844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.075245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.112490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.149844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210850 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.215287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.217030 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.218044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.433453 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.434455 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.434946 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.435084 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210876 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212380 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.213982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214378 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214669 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432705 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432882 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143707 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143881 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143938 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144597 4183 kuberuntime_manager.go:1029] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144897 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" gracePeriod=600
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308269 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308566 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.326145 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336453 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336518 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.357702 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.363927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364421 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398349 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.415828 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422311 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433273 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433357 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441485 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.455729 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810166 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" exitCode=0 Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810292 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.847565 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.873044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.896125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.915257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.934393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.958094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.976658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.997966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.032262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.048311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.086538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.109033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.135406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.156672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.176003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.197306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209175 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.213324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.218852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.219132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.223001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.241735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.259621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.275697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.291483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.307681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.323854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.350850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.369439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.387483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.411092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.431332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.434505 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.435068 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.451426 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.466193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.483766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.501406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.518467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.536583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.551408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.570010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.590057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.608920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.624312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.641483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.655473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.671871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.690397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.706352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.721870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.736900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.752144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.769432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.785206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.802281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.819192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.835712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.851680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.867529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.884369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.900742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.918433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.932581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.950350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.971433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.992175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.007187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.025746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.039994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.055988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.074367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432897 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208976 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209291 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209744 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210512 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211047 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212269 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212665 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.433466 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.438703 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433232 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209007 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210285 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210925 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:15 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:15 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.433117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.457098 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.948590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.970729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.989282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.005768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.024998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.041209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.129289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.145290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.161627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.177245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.193900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.206700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208608 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.225336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.242202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.261068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.279284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.296508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.315717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.332078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.348731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.371182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.387672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.404528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.419910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433461 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433592 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.435762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.459647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.483350 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.506649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.525901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.543177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.561967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.579687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.596668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.612126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.628544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.653100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.678081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.697242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.712910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.727298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.746461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.764908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.782877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.801199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.820496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.838348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.894723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.933093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.955356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.072663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.089741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.107102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.126165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.145950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.163647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.184104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.202262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210984 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220946 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222504 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.224626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.228641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.239184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" 
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.255593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1a
cbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.273195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.289897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.308697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.327454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.343876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.362854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209334 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432929 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209116 4183 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209622 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210082 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210544 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.212185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213550 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213973 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.215041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435375 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435480 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208695 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:20 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:20 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433180 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.459145 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706032 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.724535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.729937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730046 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730097 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.751424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.756916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757003 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757089 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.773216 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.780641 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.780890 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781013 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781142 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781255 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.801999 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809563 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809578 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.824236 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.824658 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214064 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214445 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.215733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216061 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.217584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219714 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220511 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.222074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223139 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.224418 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.225139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432309 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433753 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433921 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432355 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208588 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209032 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210351 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210555 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.230134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.248533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.264479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.285660 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.303173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.326573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.366940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.401501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.422077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432471 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432600 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.440014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.456889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.461004 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.472347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.493680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.511343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.527449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.540308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.556926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.570211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.584470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.598524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.619271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.634931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.655973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.673994 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.690758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.708883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.725130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.743404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.760254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.775733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.790392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.813140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.830011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.846862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.861042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.875979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.893098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.908426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.929269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.944564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.978018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.996040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.013049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.027542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.041978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.069309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.095303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.109062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.128023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.145220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.160576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.194514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.213477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.230062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.246238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.261600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.279484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.299522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.318516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.335201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.352749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.368535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.386111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.402332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.417747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433887 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 
19:54:27.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.208703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208929 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.216635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217996 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219000 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432437 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209342 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432855 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433510 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433741 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.213176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433722 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433901 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.462856 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033020 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033108 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033152 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.050258 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056401 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.071383 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.076711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077577 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.093739 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099270 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.113961 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119692 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119732 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.133861 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.134301 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.213480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.216046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.433716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.434761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433194 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209494 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210021 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210356 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211140 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211873 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433034 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.211232 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.211682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.231367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb02
3d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.259403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.279430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.306653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.345733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.376635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.399440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.414891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.429115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.445895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.461369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.464272 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.479367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.497942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.515475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.532403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.547528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.566078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.587104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.604306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.619053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.634533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.651601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.669017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.687095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.707145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.728680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.745283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.763651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.779627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.795344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.818673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.831405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.848366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.863890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.880017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.897599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.910502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.933138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.946708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.965636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.983686 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.999723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.018199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.032680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.049943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.070090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.085296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.103914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.119738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.136328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.150407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.167647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.183476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.207498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208154 4183 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.209104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.228186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.246442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.261852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.277960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.303487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.327849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.352335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.369874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.394766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.432518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433012 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433157 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.454684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.472545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.494317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209211 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209277 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.214672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432700 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432893 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434304 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434764 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209996 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211016 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212534 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213515 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.432882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.433301 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208688 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432462 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.465764 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435116 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:41 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:41 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502140 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502219 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.522002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526986 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.542164 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547568 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547938 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.548000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.608295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614594 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614865 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.629391 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636584 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654760 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654994 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433638 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209524 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210139 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210279 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210542 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210849 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212315 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213596 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431848 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431962 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435394 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.211512 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.211695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.213998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215122 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.218388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.222016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.223938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.225149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.227179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.240007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.259224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.275105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.290325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.307423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.323207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.342074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.370459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.387924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.402115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.421449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.431642 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.432233 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.438197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.455730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.468173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.474267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.490929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.507429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.521314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.538859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.555219 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.569369 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.589336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.612362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.632632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.648393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.663047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.678079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.694226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.709116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.723651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.738885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.754187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.769606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.786263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.799613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.815253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.830166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.846436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.859958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.877492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.898599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.914895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.931482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.950313 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.966261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.996191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.023186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.049970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.072034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.094562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.119466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.138111 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.153682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.168901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.187256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.205375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.211161 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.230252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.258103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.278343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.299250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.321347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.352887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.375028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.393393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.412304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:46 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.434107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.454652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.470021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.979183 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log"
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.983368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"}
Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.984354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.004075 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.022911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.041708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.059232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.076089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.094130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.114271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.139764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.159564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.198244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208409 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208702 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208922 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209082 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210547 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211004 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211577 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.385702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.407160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.426706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.431948 4183 
patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.432059 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.443756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4
b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.476657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.494628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.517651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.539526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.556764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.576728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.601147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.618215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.634440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.650510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.666098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.682414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.699579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.713272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.731917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.746511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.764588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.785187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.801677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.820296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.840186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.857902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.878992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.901394 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.918627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.939003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.963913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.981263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.990280 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.991066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996318 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" exitCode=1 Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996483 
4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"} Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996545 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.999114 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.004133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.007433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.050322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.067002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.082673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.102605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.124526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.148231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.170532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.187756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.204730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.224150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.240943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.284709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.300043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.316237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.369308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.386124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.424071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434069 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434370 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.463229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.506307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.541843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.583871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.625697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.667690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.702519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.746164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.783633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.823565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.863562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.903219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.944529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.985494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.003689 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.010720 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.011307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.028365 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.064174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.104760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.141538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.183514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208674 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.208762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209678 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209469 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209839 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.231071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.267169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.307087 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.344234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.387432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.423284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432149 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432526 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.465195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.511085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.544317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.597975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.624935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.668350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.701641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.744235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.791191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.830885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.866166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.903388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913510 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913891 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.913918116 +0000 UTC m=+778.606583354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.914483242 +0000 UTC m=+778.607148110 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.914705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915224 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915430 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914881 4183 configmap.go:199] Couldn't 
get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.915604414 +0000 UTC m=+778.608976492 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916496 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916479339 +0000 UTC m=+778.609144057 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.917014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916995534 +0000 UTC m=+778.609660332 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918297 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918753 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918839 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.918764944 +0000 UTC m=+778.611429532 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919494 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc 
kubenswrapper[4183]: I0813 19:54:49.919629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920231 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920267767 +0000 UTC m=+778.612932585 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920292518 +0000 UTC m=+778.612957166 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920340 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.92036342 +0000 UTC m=+778.613028118 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920421 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920445242 +0000 UTC m=+778.613109930 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920498 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920520524 +0000 UTC m=+778.613185212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920568 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920590326 +0000 UTC m=+778.613255134 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920648 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920671489 +0000 UTC m=+778.613336297 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920856 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920885 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920900 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object 
"openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920952 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920938726 +0000 UTC m=+778.613603424 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921011 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921036399 +0000 UTC m=+778.613701317 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921125 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921116101 +0000 UTC m=+778.613781009 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921173 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921208 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921198484 +0000 UTC m=+778.613863192 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921254 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921274646 +0000 UTC m=+778.613939364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921325 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:51.921350078 +0000 UTC m=+778.614014756 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921408 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.9214332 +0000 UTC m=+778.614097868 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921489 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921510553 +0000 UTC m=+778.614175201 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921576 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921591 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921614996 +0000 UTC m=+778.614279684 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921668 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921689558 +0000 UTC m=+778.614354236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921740 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921896 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921878233 +0000 UTC m=+778.614543341 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921959 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921987116 +0000 UTC m=+778.614651794 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.943583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.983439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015143 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015612 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015720 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" exitCode=1 Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015750 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"} Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015873 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.016392 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.017160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023342 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023753 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023926 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.023907234 +0000 UTC m=+778.716571882 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023994 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024015577 +0000 UTC m=+778.716680215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024065 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024083329 +0000 UTC m=+778.716747977 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024127 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024152 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024144361 +0000 UTC m=+778.716809119 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024182 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024200093 +0000 UTC m=+778.716864871 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024247 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024266485 +0000 UTC m=+778.716931253 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024312 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024332367 +0000 UTC m=+778.716997005 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024374 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024391398 +0000 UTC m=+778.717056036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024434 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024472751 +0000 UTC m=+778.717137389 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.112238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125847 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126181 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126215 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126247 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126280 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126304 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126223 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126303866 +0000 UTC m=+778.818968554 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126398399 +0000 UTC m=+778.819063047 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126420499 +0000 UTC m=+778.819085158 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12643953 +0000 UTC m=+778.819104238 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126636 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126749 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126848 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126771 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126886 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126911 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126939 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126955 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126975 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126919774 +0000 UTC m=+778.819584572 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126990 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127007 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127009 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126995766 +0000 UTC m=+778.819660434 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127029 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127041 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127059 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126852 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127066728 +0000 UTC m=+778.819731406 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127109 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12712121 +0000 UTC m=+778.819785938 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12714568 +0000 UTC m=+778.819810368 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127189 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127167371 +0000 UTC m=+778.819832019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127198732 +0000 UTC m=+778.819863380 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127214 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127216532 +0000 UTC m=+778.819881200 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127233 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127244 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127296184 +0000 UTC m=+778.819960883 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127458 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127481 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127493 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127553 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127581 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127600 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127589303 +0000 UTC m=+778.820253991 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127737 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12785508 +0000 UTC m=+778.820519768 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127921 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127906822 +0000 UTC m=+778.820571530 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128135 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128231 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128268 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128326 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128312973 +0000 UTC m=+778.820977781 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128361 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128390626 +0000 UTC m=+778.821055364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128404 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128453 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128440887 +0000 UTC m=+778.821105575 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128473 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128495 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128509599 +0000 UTC m=+778.821174327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128439 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128567541 +0000 UTC m=+778.821232239 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128600 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.128637163 +0000 UTC m=+778.821301871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128660 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128702 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128691084 +0000 UTC m=+778.821355782 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128709 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128728 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128771 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128852 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128856 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128836468 +0000 UTC m=+778.821502216 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128907 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128878 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12890901 +0000 UTC m=+778.821573748 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128943 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128964 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128950722 +0000 UTC m=+778.821615360 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128973962 +0000 UTC m=+778.821638640 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129030 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129021774 +0000 UTC m=+778.821686462 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129048 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129060 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129149 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129173258 +0000 UTC m=+778.821837946 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129166 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129211 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129226 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129207419 +0000 UTC m=+778.821872057 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129234 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12925417 +0000 UTC m=+778.821918868 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129294471 +0000 UTC m=+778.821959169 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129400375 +0000 UTC m=+778.822065193 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129486 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129533 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129521058 +0000 UTC m=+778.822185736 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod 
\"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129861 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.129908259 +0000 UTC m=+778.822573037 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129961 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.130024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130039 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config 
podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.130068084 +0000 UTC m=+778.822732772 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132528 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132552 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132570 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132595 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132576685 +0000 UTC m=+778.825241403 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132658 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132645697 +0000 UTC m=+778.825310385 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132670098 +0000 UTC m=+778.825334786 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132703 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132690988 +0000 UTC m=+778.825355656 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132715 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132721179 +0000 UTC m=+778.825385837 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132538 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132765 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.13274916 +0000 UTC m=+778.825413858 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132905 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132922 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132932 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132982 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133023498 +0000 UTC m=+778.825688346 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133186533 +0000 UTC m=+778.825851161 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133273 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133217613 +0000 UTC m=+778.825882211 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133331537 +0000 UTC m=+778.825996135 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133506 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133532552 +0000 UTC m=+778.826197360 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134926 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135625 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.135673293 +0000 UTC m=+778.828338082 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136185 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136473 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136551 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136564 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136574 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136841 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136919 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137012 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137190 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137288 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137325 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137346 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137362 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137490 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137505 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137551657 +0000 UTC m=+778.830216475 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137603969 +0000 UTC m=+778.830268597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.137980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138236237 +0000 UTC m=+778.830900985 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138281538 +0000 UTC m=+778.830946236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138480444 +0000 UTC m=+778.831145132 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138507944 +0000 UTC m=+778.831172632 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138546 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138529035 +0000 UTC m=+778.831193863 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138556696 +0000 UTC m=+778.831221394 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138580156 +0000 UTC m=+778.831244844 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138614 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138599577 +0000 UTC m=+778.831264225 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138635 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138624268 +0000 UTC m=+778.831288946 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138652 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138643238 +0000 UTC m=+778.831307906 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139058 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139079 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139089 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139163 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138758162 +0000 UTC m=+778.831422820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139243 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139258376 +0000 UTC m=+778.831923084 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139303 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139290797 +0000 UTC m=+778.831955465 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139314887 +0000 UTC m=+778.831979565 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139459 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139483 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139497 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139662 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139648807 +0000 UTC m=+778.832313505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.153347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.168408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.186597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209165 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.234358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239388 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239413 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.239474365 +0000 UTC m=+778.932138983 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239238 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 
crc kubenswrapper[4183]: E0813 19:54:50.240043 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240054 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240068332 +0000 UTC m=+778.932732950 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240104 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240091983 +0000 UTC m=+778.932756601 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240179 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240238 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240224957 +0000 UTC m=+778.932889615 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240274 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240291 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240313619 +0000 UTC m=+778.932978247 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240438 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240579357 +0000 UTC m=+778.933243985 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240922 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241118 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241145283 +0000 UTC m=+778.933809931 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241267677 +0000 UTC m=+778.933932355 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: 
\"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241498 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241507 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241518 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241531 4183 projected.go:200] 
Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241559965 +0000 UTC m=+778.934224583 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241603 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241614 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241628207 +0000 UTC m=+778.934292825 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241645127 +0000 UTC m=+778.934309775 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241680 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241690 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241698 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241726 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241717949 +0000 UTC m=+778.934382577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241859883 +0000 UTC m=+778.934524631 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242106 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242134141 +0000 UTC m=+778.934798759 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242175 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242151 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242225 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242217404 +0000 UTC m=+778.934882022 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242245 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242271 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242264985 +0000 UTC m=+778.934929603 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:50 crc kubenswrapper[4183]: 
E0813 19:54:50.242546 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242517562 +0000 UTC m=+778.935182150 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242670 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242689817 +0000 UTC m=+778.935354405 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242742 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242755 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242764 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242864322 +0000 UTC m=+778.935528950 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242907 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242933234 +0000 UTC m=+778.935597852 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242985 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243097 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.24313154 +0000 UTC m=+778.935796278 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243184 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243200002 +0000 UTC m=+778.935864620 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242683 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243223 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243245 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243238833 +0000 UTC m=+778.935903461 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 
19:54:50.243342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243366 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: 
\"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243537 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243072 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243594013 +0000 UTC m=+778.936258691 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243629 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243653255 +0000 UTC m=+778.936317873 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243702 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243727597 +0000 UTC m=+778.936392235 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243849 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244151 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244400 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244409 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.244429247 +0000 UTC m=+778.937093865 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244478 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244489 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244498 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244513969 +0000 UTC m=+778.937178597 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244552 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244568421 +0000 UTC m=+778.937233039 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244614 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244629 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244636 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" 
not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244653833 +0000 UTC m=+778.937318451 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244670084 +0000 UTC m=+778.937334682 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244714 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244724 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244731 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244747586 +0000 UTC m=+778.937412214 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244915 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244930 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244938 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244957402 +0000 UTC m=+778.937622020 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245311182 +0000 UTC m=+778.937975920 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245436 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245458 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245476 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245487 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245456736 +0000 UTC m=+778.938121364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245532438 +0000 UTC m=+778.938197076 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245554229 +0000 UTC m=+778.938218907 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.266125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.302926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.341687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.345649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: 
\"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346894 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347377 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347422 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347481 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347464637 +0000 UTC m=+779.040129395 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347661 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347704 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347717 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347737474 +0000 UTC m=+779.040402172 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348006 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348048 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348059 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.348105455 +0000 UTC m=+779.040770163 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.386081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using 
insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.424660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.433886 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.434003 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.461591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.469877 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.504766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.544252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.582352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.621119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.664313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.705319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.745699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.784401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.824759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.867726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.905702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.943268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.984275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.023422 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.031437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.066347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.102216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.140548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.187704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.208931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.229056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.262028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.304446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.346546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.383386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.425427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434360 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.464237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.505191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.544248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.596936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.626083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.663548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.706377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.745141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.786217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.822228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.862367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864317 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.881356 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.887980 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888455 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888651 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888855 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.905387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.909715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915192 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915351 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.930991 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935734 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935872 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935952 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.945186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.951005 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf
36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265
397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\
\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956113 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956126 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956144 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956164 4183 setters.go:574] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.970348 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.970708 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.983754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.022489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.064920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.110390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.143981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.188057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209152 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209531 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.225447 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.265255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.397442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.420496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.433919 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:52 crc kubenswrapper[4183]: healthz check failed Aug 13 
19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.434441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.439879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.467628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.493539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.558874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.595904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.618063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.638364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.663620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.704286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.743501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.785609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.823055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.861640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.911888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.942851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.982912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.022895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.066123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.104664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.142471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.182116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208895 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209030 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209235 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209320 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.225002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.260531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.300846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.349891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.400562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439317 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:53 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:53 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439448 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.474565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.500498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.536388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status:
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.557510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.650514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.678939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.705223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.749555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.769203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.790200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.821865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.932385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.952008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.968266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433355 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:54 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:54 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674290 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674438 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674487 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674523 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674544 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.229499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.244422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.260315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.275985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.293209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.312525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.333056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.348308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.364608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.381594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.398950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.417978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432617 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.445479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.461423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.471930 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.477542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.496230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.526141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.541361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.559281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.576610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.590130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.606901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.620106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.658302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.672222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.690672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.733199 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.764121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.786088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.803209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.821187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.839703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.855249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.873204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.891683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.912246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.929234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.948525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.975739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.003162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.019617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.037213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.054994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.070301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.095497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.114927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.129125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.144330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.168599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.193386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208645 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.210727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.227086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.251561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.269053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.286215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.300984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.318296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.335696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.352451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.370885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.386214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.421858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432623 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432714 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.464577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.504704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.546354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.583345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.622537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209567 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.213063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.433957 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.434101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.214396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432859 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432994 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.209734 4183 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212679 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:55:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.473569 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209654 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:01 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:01 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432969 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059466 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059752 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.075262 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080880 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080940 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.098527 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106586 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106611 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106692 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.125104 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131575 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131598 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131666 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.149335 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156171 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156266 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156291 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156315 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.184532 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.184600 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432679 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432963 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210020 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210447 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211199 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211760 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.213749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214261 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.216238 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.232063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.247610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.265721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a
9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.283497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.302981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.323294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.350613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.366223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.382297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.397395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.417904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.434398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.454439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.471231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.488719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.555332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.572157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.588757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.603857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.621247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.637302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.653326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.672145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.688405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.705952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.726645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.742215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.760405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.780920 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.799180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.814006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.830004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.846385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.864209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.901163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.919360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.935888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.952267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.974215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.989706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.005349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.021434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.036710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.055051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.071295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.100228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.116890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.131052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.145063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.163065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.178068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.192517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208879 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.211280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.227991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.242520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.256518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.274705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.294055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.311960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.328612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.346929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.364604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.380202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.397644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.413517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.429648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433363 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.446724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212810 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.231175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.250362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.265662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.293002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.319971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.355288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.373403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.393258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.418137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434027 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434105 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.436396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.451638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.475868 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.480125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.498926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.511933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.528649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.548498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.565532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.582098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.604239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.624152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.646558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.664405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.687586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.708194 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.730101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.747442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.766012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.782908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.800496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.823873 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.847081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.862973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.880015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.899089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.917710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.935229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.953451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.970115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.988675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.004078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.029213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.044491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.058377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.075702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.095342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.111663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.125131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.141162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.157732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.171548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.185550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.200047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.218723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event 
from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.236574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.255971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.272664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.289373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.304926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.331647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.351437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.368268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.389485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.428467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432070 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.467640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.508950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.547345 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.596599 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210860 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210885 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215853 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432260 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432676 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.211419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.212455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220203 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.221322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.223622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433685 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211448 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.212061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432199 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432323 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.477003 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.100349 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208991 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209302 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211675 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.213028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.214965 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432630 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:11 crc 
kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432725 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209569 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312682 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.336305 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342117 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342165 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342196 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.356905 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362509 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.363118 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.363453 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.382015 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.388729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.390274 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.405680 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413911 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414164 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.429545 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.431016 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431959 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210562 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.433551 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.434499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.208664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208746 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.432945 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.433072 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208712 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209037 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212901 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.213770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.214064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.271578 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.288009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.314696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.339255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.362935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.379905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.396993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.410210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.425332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.441597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.456529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.473162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.478088 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.491167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.518463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.536989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.550933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.566937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.585086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.598515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.618050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.638710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.657374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.675866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.694321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.715765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.734584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.756474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.776639 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.800921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.823967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.843718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.864204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.884223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.905011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.934225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc 
kubenswrapper[4183]: I0813 19:55:15.954983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.971916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.988501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.003422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.023573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.043104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.059921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.079408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.097012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.119124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.144412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.161658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.177352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.194116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209866 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.214635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.238417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.256768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.279037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.299457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.315298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.332228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.350105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.367126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.384980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.401862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.416449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.434215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.452651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.471594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.492431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.511474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.532210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209081 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210065 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210763 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.211041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.220934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.221629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.222623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434881 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432873 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213088 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214384 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.215608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432546 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209167 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434604 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434755 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.480008 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.212768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214046 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216011 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219718 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.223100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.223369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433308 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.208922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433731 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711557 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.727956 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.750310 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756292 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.775457 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781703 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781761 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.799995 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806472 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.825055 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.825143 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209239 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212840 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433126 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433531 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210445 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210724 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212283 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212589 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.214381 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.228412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.246437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.265169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.283258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.316577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc 
kubenswrapper[4183]: I0813 19:55:25.338447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.355035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.376554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.393883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.410875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.427722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432051 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.447463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.464002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.478635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.481638 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.500612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.525681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.545535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.561605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.577200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.596899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.617866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.633453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.650299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.668548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.686041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.701069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.717958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.734020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.750552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.767982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.785708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.808717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.829407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.847739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.868006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.889086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.909030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.925576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.945757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.964566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.984769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.001131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.021317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.039002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.055656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.073092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.090569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.110707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.128750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.146329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.165602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.180669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.196983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.211409 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.217974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.234638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.248240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.266141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.279599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.301294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.320499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.340955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.358432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.376079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.395497 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.412336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.428665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433750 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433963 4183 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.448253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208969 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209242 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209265 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209277 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209329 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208873 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.218028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209309 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432997 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211257 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211367 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211680 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211659 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.220154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.433518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.434314 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172465 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"}
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.189584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210004 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.212703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.229473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.252125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.273569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.300068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.316046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.333293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.352597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.372152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.390719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.407591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.425401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432946 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432999 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434179 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router"
containerStatusID={"Type":"cri-o","ID":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434265 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" gracePeriod=3600 Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.443916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.462295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.482482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.484139 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.495955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.509494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.527689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.542402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.558071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.574222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.596126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.617001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.637687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.658970 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.677549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.698057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.714660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.733913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.748767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.764003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.782511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.807091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.828143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.846845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.863386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.880347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.897313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.913133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.937187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.954651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.970318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.991030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.007065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.022281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.038561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.052525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.078614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.098370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.114293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.133737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.153227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.170938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.188355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.205023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208423 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209462 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209883 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210273 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210856 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211159 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.225973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.243672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.262906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.278759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.300198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.323760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.344152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.363258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.390700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.413277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.433133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.208997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209053 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.012000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.031264 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037889 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.065963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093540 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.107668 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113186 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113313 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128925 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128988 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208949 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209147 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208687 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209661 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.213221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.215581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209719 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209869 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.229598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.247083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.271492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.287404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.304102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.324512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.346598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.363728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.386112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.404159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.419047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.438455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.457017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.473934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.485910 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.503259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.536591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.577260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.606515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.625880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.645143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.665059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.684499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.702131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.718974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.738712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.755947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.775367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.788612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.805972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.821347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.846705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.862544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.878903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.903665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.918377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.939758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.959140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.977195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.996896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.013973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.030155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.048141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.064376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.092633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc 
kubenswrapper[4183]: I0813 19:55:36.111030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.135614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.151971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.166049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.179345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.192850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.208522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.226267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.240867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.260632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.288667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.302312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.316731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.330416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.348769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.367972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.382453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.398560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.412950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.427473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.441537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.459557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.480479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210029 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210231 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211442 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213171 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.212039 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.208900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.213295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209145 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209480 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.487965 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.209372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210118 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210662 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211714 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212303 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213279 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.218971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.219169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209095 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.209483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214921 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213148 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213315 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216288 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.219057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525399 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525527 4183 setters.go:574] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.545583 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.550257 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.563961 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.567932 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568178 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.581710 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586755 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586967 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.603058 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.607984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608023 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608034 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608053 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608073 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.627194 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.627269 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.210209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.220203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.234598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.252326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.274895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.293900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.311923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.325454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.344118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.364002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.383908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.404294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.423966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.439324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.454437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.475139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.489594 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.491009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.508589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.528109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.546926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.563106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.579428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.597344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.613067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.627648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.645576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.663074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.679884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.694595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.713364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.730897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.744865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.761743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.779617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.799458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.820684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.839895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.856965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.881500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.902081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.926453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.949887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.971894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.989187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.013545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.036552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.055414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.081184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.101311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.119677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.138006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.158192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.180428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.207305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.208859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208879 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.227698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.249455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.274009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.294673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.324302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.340935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.359362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.378906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.402173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.423390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.442699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.465405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.495478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.515068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.531978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210585 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212511 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212707 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.212077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.491999 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209190 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209047 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208978 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.208949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208526 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208966 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209449 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209877 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210191 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211475 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.215640 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764483 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.783246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.791630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792731 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.811048 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.849442 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.857971 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858629 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.887074 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.895304 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913122 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913189 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675270 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675762 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.676105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210262 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210539 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210905 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211177 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.213583 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.227942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.244287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.264242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.285094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.301999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.318688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.335131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.360851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.377755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.397198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.422138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.441441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.458117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.479031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.493683 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.501282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.522960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.548171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.562761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.577018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.593049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.614034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.634136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.651295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.668755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.685663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.703587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.718571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.736345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.752665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.767003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.781137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.797148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.815248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.836444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.852126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.868337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.883350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.902739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.922450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.937512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.953555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.968726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.987218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.003406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.019021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.035238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.055083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.067732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.095583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.113874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.137709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.162697 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.185519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.205393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.208966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.210274 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.228200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.245220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.266423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.285124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.305767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.322163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.340535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.355586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.370258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.385888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.408444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc 
kubenswrapper[4183]: I0813 19:55:56.426946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.442190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209181 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209728 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210590 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211953 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212576 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.209265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209863 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209421 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210707 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211757 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.213395 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.213892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.208869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.210318 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.495437 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209049 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209297 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209651 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210726 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211499 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212096 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213236 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213604 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208283 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.213653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199970 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199992 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200053 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.215692 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220843 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.235272 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239446 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.253328 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259075 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.276034 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282262 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297033 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297166 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208740 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208774 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208939 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209085 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209155 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209240 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.229952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.247903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.264720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.282737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.300145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.320640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.338568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.356176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.373593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.387611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.403395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.419729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.442408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.461205 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.480482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.495992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.497511 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.512671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.528541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.551161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.565525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.582667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.597932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.616593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.632259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.649354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.664988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.691337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc 
kubenswrapper[4183]: I0813 19:56:05.711227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.730971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.750932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.770151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.791679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.815346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.836715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.854622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.875412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.902980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.920983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.943894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.963982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.986722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.009168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.025738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.049182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.076325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.098116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.115644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.136405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.158495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.178972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.200008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.208858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.220044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.240662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.259263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.278110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.295642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.315056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.332747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.353472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.369451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.385669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.403059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.419689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.438641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.460732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.480764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.500187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209716 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210002 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210200 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210815 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.213512 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209149 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.211077 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209588 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210243 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210495 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210924 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.216261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143478 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143573 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.208895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.499170 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211696 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212961 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.213054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.208911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209103 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209345 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209910 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.214055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.208653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512299 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.526050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531464 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.545560 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.550937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551025 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551074 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.564534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.568959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569052 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569093 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.588623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.594962 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595057 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608550 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608622 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209302 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210414 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211583 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.227187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.243581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.259957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.277879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.294626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.311998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.334503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.350299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351259 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351508 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" exitCode=1 Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"} Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352652 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352885 4183 scope.go:117] "RemoveContainer" 
containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.353509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.357679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.378410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.399024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.414248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.434201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.449765 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.464647 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.489757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.500460 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.506362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.524239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.541269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.557247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.572415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.588387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.604962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.620897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.638676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.655024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.670020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.685454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.699234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.721493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.737484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.760495 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.780416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.801081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.816886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.836202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.858273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.875002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.894663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.926236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.945697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.969096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.003642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.020976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.047977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.068544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.093756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.108855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.124349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.149236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.165761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.184978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.202232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.222499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.239376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.269516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.286118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.304628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.320653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.339570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.356069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.364234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.382370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.399208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.415684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.432113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.445471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.462009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.483643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.503598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e
067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.522071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.539124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.558173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.576366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.593889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.611320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.631165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.665153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.698930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.715747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.734008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.751758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.771091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.788994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.806566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.836602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.854684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.878263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.904117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.919747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.940516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.978959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.993682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.008120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.033367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.047860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.063536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.105051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.143381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.180872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209005 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209449 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209934 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210434 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211011 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.215868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.216240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.228751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc 
kubenswrapper[4183]: I0813 19:56:17.261428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.302559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.343270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364224 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" exitCode=0 Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364294 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364331 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364361 4183 scope.go:117] "RemoveContainer" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.383378 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service 
ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.421610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.429562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433483 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.460251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.504343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.549174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.580721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.625069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.662510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.702944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.744126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.783212 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.819603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.860931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.903214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.940803 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.982282 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.026320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.063229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.101991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.143582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.183959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209070 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.226485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.269575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.320088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.352706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.389781 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.423420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432205 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.463661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.499939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.543935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.582775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.623942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.663709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.702396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.742640 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.780916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.825602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.872737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.903669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.944095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.981653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.026010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.062359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.101305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.144546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.181393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209254 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215798 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.219556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.264480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.302417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.342617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.384648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.424299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432424 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.462156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.505269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.545297 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.673785 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.706049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.727940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.752038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.769963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.832900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.849550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.876249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.906455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.943305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.983367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.023068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.063089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.100118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.141356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.181972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213946 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213966 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.215044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.227465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.265598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.339301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.354486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.386384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.430137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432975 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.464674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.496082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.502134 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.519051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-0
8-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.546187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.593213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.624098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.660919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.699445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.744033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.784885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.821728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.862049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.900037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.941619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.985322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.023563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.064307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.106073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.144427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.183199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209314 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210447 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210711 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211780 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211912 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.224279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.262956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c
418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432690 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432667 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432776 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.208984 4183 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.209751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210872 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213784 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214118 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214370 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.432949 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.433115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208208 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.433884 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.434077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700153 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700178 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700251 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.716426 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723122 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723329 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723370 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.739930 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745162 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745232 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.798616 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.798684 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208335 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210603 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211177 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211561 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212669 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213148 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.230450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.247405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.268587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.286304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.303362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.317926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.335582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.350646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.373454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.390222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.408072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.425205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431509 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.441396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.455680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.473899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.491360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.503760 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.509424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.533014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.548674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.564661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.580489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.601151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.623561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.641068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.656056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.673617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.690458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.701915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.714892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.732566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.751595 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.768521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.785310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.802176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.818491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.838598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.854870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.882463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.902023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.920979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.938464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.955760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.973037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.998760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.018333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.035385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.050514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.065416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.100773 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.155977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.193292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209172 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.211758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.231006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.248508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.277241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.296764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.311566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.326400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.354032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.379063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.403086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.429143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432332 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.450079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.465152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.483462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.501607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.519010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210679 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210782 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.215409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.217522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.224625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.434736 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.435371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.208964 4183 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.210573 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.233990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.260299 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.357336 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.418047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.445871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.466549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.521455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.542326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.561741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.593118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.612964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.634608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.654884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.674200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.695472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.712581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.728469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.746766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.769183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.791487 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.809622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.832051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.852544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.892347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.916869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.935146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.956539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.980600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.025381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.044215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.063505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.081120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.114123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.135507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.151580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.169593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.182709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.200942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.223285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.239358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.254769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.272688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.289248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.317186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.336041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.358950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.388992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.439765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.439940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.565093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.598019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.617523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.646586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.693340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.718939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.737067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.755028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.774198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.794979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.811977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.831254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.854229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.877495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.900155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.924133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.952726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.971984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.996294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.021090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.433915 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.434596 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.509667 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209205 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209867 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210377 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210692 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210947 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212988 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213776 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.214991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.215155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433423 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433535 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.211417 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.433986 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.434187 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208512 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210002 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210888 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434749 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208601 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432745 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432945 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173258 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173282 
4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173312 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.190060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195769 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195884 4183 setters.go:574] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.213037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.218438 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.225371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.231321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.255693 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.255997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263763 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.281267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.285061 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.303167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.305852 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.306098 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.319336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.335442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.350897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.368024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.390210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.407238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.427848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432689 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432882 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.444910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.463133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.479682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.498515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.511479 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.515038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.533761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.551324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.569990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.588562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.616902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.631852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.647608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.663514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.679297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.704639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.723493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.739076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.756040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.771347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.787684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.803370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.824609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.842581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.860221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.884089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.914013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.931507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.946445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.965138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.986679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.004674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.021088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.042220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.127137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.148172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.169532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.190295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209126 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209845 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.213053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server 
(\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.232737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.250069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.266667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.290307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.312315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.335611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.355864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.374987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.397634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.415379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.432980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433615 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433816 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.451944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.468602 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.489233 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.506230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.524348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.541674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.558081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208726 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210137 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.210472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432474 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432591 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.215412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.216182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.431916 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.432043 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143853 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143985 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.208260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.208520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.209201 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431658 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:40 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:40 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431818 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.513339 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209818 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.212560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.213403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:41 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:41 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433717 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:42 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:42 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433924 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211013 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210178 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213520 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.216009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433489 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209320 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.210735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.211324 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437424 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214816 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.236102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.256173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.277871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.299117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.318504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.337392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.357907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.377195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.396886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.412724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.430289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435076 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.446585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.464267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.484097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509744 4183 kubelet_node_status.go:729] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509768 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509888 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc 
kubenswrapper[4183]: E0813 19:56:45.514750 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.525106 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1
067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2
f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":49
2229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.532275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.547266 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553289 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553415 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.565862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.567330 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572534 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572578 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572610 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.582486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.588900 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.602856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.609613 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.609669 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.620545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.637701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.654989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.679207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.707893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.722575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.737966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.753518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.780421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.802770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.819528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.838022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.855218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.876594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.892557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.909627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.928076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.946730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.965170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.984230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.002926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.020187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.034956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.049617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.067022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.084043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.099638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.113445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.128618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.143514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.164310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.182140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.201018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.220238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.238069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.254369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.269416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.285891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.303135 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.317994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.333931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.352186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.374635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.391973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.404684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.422394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434053 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:46 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434252 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.438904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209277 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210677 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210968 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210256 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.432962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.433095 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208415 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.208561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.432899 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.433067 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432293 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432456 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.208852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432382 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:50 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:50 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.516577 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209668 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210210 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210762 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211478 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432427 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432549 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006914 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007161 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007253 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007333 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007349852 +0000 UTC m=+900.700014910 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007452 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007472935 +0000 UTC m=+900.700137893 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007555 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007558 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007573378 +0000 UTC m=+900.700238106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007603 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007616919 +0000 UTC m=+900.700281577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007564 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00764336 +0000 UTC m=+900.700308568 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007679 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007694 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007697712 +0000 UTC m=+900.700363000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007744 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007741973 +0000 UTC m=+900.700406601 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007848 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007768434 +0000 UTC m=+900.700433082 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007766 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00798148 +0000 UTC m=+900.700646438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008009 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008135 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008195 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008180835 +0000 UTC m=+900.700845883 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008235867 +0000 UTC m=+900.700900535 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008349 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008478 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008513 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008510885 +0000 UTC m=+900.701175623 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008587467 +0000 UTC m=+900.701252145 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008618998 +0000 UTC m=+900.701283796 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008640089 +0000 UTC m=+900.701304707 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009009 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009084 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009069591 +0000 UTC m=+900.701734299 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009116 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009394 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009448982 +0000 UTC m=+900.702114050 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009500 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009539094 +0000 UTC m=+900.702203912 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009649 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009690 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009681438 +0000 UTC m=+900.702346176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010074 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010163532 +0000 UTC m=+900.702828450 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010190 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010354507 +0000 UTC m=+900.703019306 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010436 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010466141 +0000 UTC m=+900.703130849 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010543 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010618 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010680 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010714708 +0000 UTC m=+900.703379396 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010739388 +0000 UTC m=+900.703404176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010859 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010760249 +0000 UTC m=+900.703424927 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.112476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.112704 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.113475002 +0000 UTC m=+900.806139800 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113752 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114054 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113885 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114172 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114149981 +0000 UTC m=+900.806814759 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114200723 +0000 UTC m=+900.806865711 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114636 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114736638 +0000 UTC m=+900.807401446 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115412 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11550687 +0000 UTC m=+900.808171738 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115852 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115913 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.115903241 +0000 UTC m=+900.808567859 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116064 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116148 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.116127588 +0000 UTC m=+900.808792386 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.117854 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.117958 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118043 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.118027002 +0000 UTC m=+900.810691730 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.118196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118255 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11830789 +0000 UTC m=+900.810972508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.208951 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219908 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219932 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219947 4183 projected.go:200] Error preparing data for projected volume 
kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.219985583 +0000 UTC m=+900.912650341 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220048 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220073 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220115147 +0000 UTC m=+900.912779965 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220477 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220530 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220517248 +0000 UTC m=+900.913182066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220661 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220689 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220732 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220716864 +0000 UTC m=+900.913381672 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220868368 +0000 UTC m=+900.913533096 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220929 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220968 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220958481 +0000 UTC m=+900.913623099 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220989 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221063 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221100 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221070114 +0000 UTC m=+900.913735362 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221111 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221135 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221148166 +0000 UTC m=+900.913812784 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221166597 +0000 UTC m=+900.913831405 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221274 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 
19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221314 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221333 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221344 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221359 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221366163 +0000 UTC m=+900.914030831 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221432 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221471486 +0000 UTC m=+900.914136104 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221519 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221554898 +0000 UTC m=+900.914219516 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221565 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221523 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221649361 +0000 UTC m=+900.914314619 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221708 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221734153 +0000 UTC m=+900.914398781 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221857 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221893 
4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221881467 +0000 UTC m=+900.914546075 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222279 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22231453 +0000 UTC m=+900.914979268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222386 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222416213 +0000 UTC m=+900.915080961 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222485 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222513 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222549 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222590 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222612898 +0000 UTC m=+900.915278156 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222678 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222699821 +0000 UTC m=+900.915364439 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222720 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222744 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object 
"openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222680 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222876 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222923 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222748762 +0000 UTC m=+900.915413510 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223005269 +0000 UTC m=+900.915670557 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223056 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.223085082 +0000 UTC m=+900.915749840 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223119 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223108992 +0000 UTC m=+900.915773820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223156104 +0000 UTC m=+900.915820872 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223197175 +0000 UTC m=+900.915861773 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223289 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223635 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223730 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223822 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223765551 +0000 UTC m=+900.916430189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223891 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.223919406 +0000 UTC m=+900.916584224 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223926 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223952 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223956 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223943306 +0000 UTC m=+900.916608074 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223994 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224006 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224012 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224018 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224036 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223764 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224063 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224054189 +0000 UTC m=+900.916718937 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22407219 +0000 UTC m=+900.916736778 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224096 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224136 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224126661 +0000 UTC m=+900.916791470 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224146232 +0000 UTC m=+900.916811110 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224334437 +0000 UTC m=+900.916999055 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223673 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224492 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224465081 +0000 UTC m=+900.917130289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224510 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224354 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224555324 +0000 UTC m=+900.917220292 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224588 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224594 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224646 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224619 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224665 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224648936 +0000 UTC m=+900.917313854 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224978 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225023 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225121 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225153 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225171 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225064 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225078049 +0000 UTC m=+900.917742637 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225257594 +0000 UTC m=+900.917922212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225353616 +0000 UTC m=+900.918018505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225370147 +0000 UTC m=+900.918034735 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.2254799 +0000 UTC m=+900.918144508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225857 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225761968 +0000 UTC m=+900.918426596 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225988 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226084517 +0000 UTC m=+900.918749135 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226147 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22619228 +0000 UTC m=+900.918857029 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226321 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226328 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226383836 +0000 UTC m=+900.919048754 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226407 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226427 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226445 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226454 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226317 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226475289 +0000 UTC m=+900.919140117 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226520 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226525 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22652795 +0000 UTC m=+900.919192698 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226565 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226559291 +0000 UTC m=+900.919224159 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226403 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226579 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226622883 +0000 UTC m=+900.919287951 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226708335 +0000 UTC m=+900.919373353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226949922 +0000 UTC m=+900.919614530 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227311 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227362 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: 
\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227748 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227771016 +0000 UTC m=+900.920435644 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227905 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22792461 +0000 UTC m=+900.920589218 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227981 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227997002 +0000 UTC m=+900.920661710 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228036 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228059 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228053564 +0000 UTC m=+900.920718172 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228090 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228105755 +0000 UTC m=+900.920770363 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228141 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228157837 +0000 UTC m=+900.920822455 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228194 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228215 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228209228 +0000 UTC m=+900.920873836 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228261 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228273 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228281 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object 
"openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228298471 +0000 UTC m=+900.920963289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228334 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228349052 +0000 UTC m=+900.921013660 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228391 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228406364 +0000 UTC m=+900.921070982 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228453 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228465 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228472 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228491986 +0000 UTC m=+900.921156594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228539 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228549 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: 
E0813 19:56:52.228581 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228574319 +0000 UTC m=+900.921239027 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228626 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228637 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228644 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228659891 +0000 UTC m=+900.921324499 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228711 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228728573 +0000 UTC m=+900.921393191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329264 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329278 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329357 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329338005 +0000 UTC m=+901.022002744 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329903 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329966463 +0000 UTC m=+901.022631191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330253 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: 
\"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330543 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330599 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330668 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330686 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330664603 +0000 UTC m=+901.023329401 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330697124 +0000 UTC m=+901.023361752 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330710 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod 
\"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330763 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331114 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331128 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331137 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330876 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330894 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331156437 +0000 UTC m=+901.023821265 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331303 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.331325322 +0000 UTC m=+901.023990270 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331484 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331506 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331517 4183 projected.go:200] Error preparing data for 
projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331522768 +0000 UTC m=+901.024187686 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331574929 +0000 UTC m=+901.024239927 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331618 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33160964 +0000 UTC m=+901.024274538 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331647 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331665 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331728 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331710023 +0000 UTC m=+901.024374961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331766 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331853 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331866 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331902 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331892878 +0000 UTC m=+901.024557616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc 
kubenswrapper[4183]: E0813 19:56:52.332094 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332066413 +0000 UTC m=+901.024731411 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332108 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332148 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332146 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332299 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod 
openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332337 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332326781 +0000 UTC m=+901.024991529 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332407643 +0000 UTC m=+901.025072371 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332426 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332462 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332476 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332467955 +0000 UTC m=+901.025132543 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332480 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332520526 +0000 UTC m=+901.025185474 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332967 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333088 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333131 4183 secret.go:194] 
Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333138 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333126874 +0000 UTC m=+901.025791612 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333187085 +0000 UTC m=+901.025851913 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333382 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333459553 +0000 UTC m=+901.026124201 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333608 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33369312 +0000 UTC m=+901.026358068 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334103 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334178 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334213 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334231 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334293467 +0000 UTC m=+901.026958395 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334331 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334378 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334367869 +0000 UTC m=+901.027032467 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334391 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334420 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334438 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334497 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334480572 +0000 UTC m=+901.027145610 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334609 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334597986 +0000 UTC m=+901.027262704 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334649 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334666598 +0000 UTC m=+901.027331216 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334720 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334731 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334741 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334850 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33475951 +0000 UTC m=+901.027424138 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334983 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335040 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335058 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335035338 +0000 UTC m=+901.027700366 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335094 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33511498 +0000 UTC m=+901.027779708 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335282 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335330287 +0000 UTC m=+901.027995095 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335423 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335453 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335455 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335482 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335501 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33546074 +0000 UTC m=+901.028125468 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335467 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335632105 +0000 UTC m=+901.028296823 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335704677 +0000 UTC m=+901.028378065 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335758249 +0000 UTC m=+901.028422837 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336215 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336244 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336259 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336286404 +0000 UTC m=+901.028951022 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336329 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336347 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336355 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336377506 +0000 UTC m=+901.029042124 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336757 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336878 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336890 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336951 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336934972 +0000 UTC m=+901.029599590 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.337359 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.337347574 +0000 UTC m=+901.030012222 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433241 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:52 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:52 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.438884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439174 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439253 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439269 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439297 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439346057 +0000 UTC m=+901.132010795 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439385318 +0000 UTC m=+901.132050036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440236 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440307 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.440395007 +0000 UTC m=+901.133059735 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215230 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.217017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433185 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433286 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433513 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677470 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677664 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677901 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 
19:56:54.677967 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.678012 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221507 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220578 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.223593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.224019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.229344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.234073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.256464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.308340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.338906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.367109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.394532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.417519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432527 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:55 crc 
kubenswrapper[4183]: healthz check failed Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.439166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.462444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.483103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.503670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.518104 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.522390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.542206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.572258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.589877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.607284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.625441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.648886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.670923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.689145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.708963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.727056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.741697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.757192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.776238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.793569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.819525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.841751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850661 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850873 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.860895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.867695 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.874961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875188 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.883106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.892060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899633 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899765 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.911897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.921211 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.928921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929350 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.931659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.949509 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9
f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:late
st\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956380 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956414 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976862 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976953 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.982193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.002440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.022208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.041617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.062279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.079343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.095441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.112237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.131963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.147240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.169278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.186660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.203240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212091 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.213071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.215348 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.221740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.240602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.258093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.276651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.295134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.316546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.337316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.361370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.382947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.397817 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.418685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.439396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.458045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.476726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.492223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.511080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.531878 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.550300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.570007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.585535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.602639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.619589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208632 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209077 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209701 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433093 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433217 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.433036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209297 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.213310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.216074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.217427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433400 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208732 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.520077 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209547 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210550 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210900 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211128 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211654 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212216 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212468 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213306 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.432910 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.433091 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209249 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433625 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:03 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:03 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.208961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432401 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:04 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:04 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210167 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210309 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212764 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213179 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.233012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.251386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.276211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.308609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.328322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.346304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.370438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.389476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.411373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.426415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: 
[-]has-synced failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.444541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.462377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.480582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.497925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.514520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.521723 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.532124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.551449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.572145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.588510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.608414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.624842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.645997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.663466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.683937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.704973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.726118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.747703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.768918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.791175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.808263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.825492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.849322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.866211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.884022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.901126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.920748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.937371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.954083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.973215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.991405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.011450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.027856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.049470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.068168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.090284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.106888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.127328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.148762 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.167735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.187452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.205447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.208991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.265709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299722 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.314215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.338715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.339053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345505 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.364440 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.376755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378999 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.400566 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef54
97089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"siz
eBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6
315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406175 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406200 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406252 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.411665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc 
kubenswrapper[4183]: E0813 19:57:06.422309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426488 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426589 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426641 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426678 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.434738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443135 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.450719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.468276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.486451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.504126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.523291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.542000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.562857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.577177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.599599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.616132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209674 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.212894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.215572 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432008 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432121 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.210454 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432413 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210988 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211762 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432312 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144195 4183 patch_prober.go:28] interesting 
pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144311 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144370 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145382 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145608 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" gracePeriod=600 Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433145 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433281 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.524136 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577107 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" exitCode=0 Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577516 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc"} Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577545 4183 scope.go:117] "RemoveContainer" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.601156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.620676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.638035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.653861 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.672057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.689895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.708523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.727279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.741845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.757649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:57:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:57:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.777734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.798631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.817019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.833558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.849947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.893332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.910649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.936523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.957273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.976704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.998571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.020229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.038163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.068755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.087412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.103659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.127081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.145394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.164353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.183260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.199646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209055 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209120 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209416 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210924 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.224234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.243038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.258305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.274488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.289129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.309220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.326294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.345580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.365715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.383415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.407496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.424740 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.431965 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.432058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.444002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.465876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.482488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.500487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.518607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.535174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.559292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.579424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.600041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.614910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.631988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.649929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.667192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.684139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.702689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.716183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.736082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.781503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.809968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.833429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.861241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.891271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.913689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.934654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435671 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435765 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209680 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209925 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210080 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213811 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434236 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434379 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433529 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433636 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.213897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432388 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.526372 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.211386 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434755 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718858 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719002 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719105 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:16Z","lastTransitionTime":"2025-08-13T19:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.099408 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.125052 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208424 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209070 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212819 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.213904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432718 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432908 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.321547 4183 csr.go:261] certificate signing request csr-6mdrh is approved, waiting to be issued
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.338156 4183 csr.go:257] certificate signing request csr-6mdrh is issued
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432251 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213078 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213387 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213913 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340423 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 11:41:58.636711427 +0000 UTC Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340502 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6207h44m39.296215398s for next certificate rotation Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432000 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432079 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.208455 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.208853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209438 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.211693 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.212492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341232 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 00:37:29.51445257 +0000 UTC Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341283 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6196h40m9.173174313s for next certificate rotation Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.435956 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.436048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.528200 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208581 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208339 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208412 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208568 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.213307 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.214619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432151 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.611965 4183 controller.go:195] "Failed to update 
lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": StorageError: invalid object, Code: 4, Key: /kubernetes.io/leases/kube-node-lease/crc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 705b8cea-b0fa-4d4c-9420-d8b3e9b05fb1, UID in object meta: " Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209639 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433664 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211820 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.214586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.215309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431727 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208819 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211114 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211329 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215034 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433163 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:25 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:25 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.530038 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434189 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:26 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:26 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434368 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208937 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209769 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210406 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210822 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210847 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433656 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433849 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432307 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213819 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.214137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433632 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433961 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210362 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.431976 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.432089 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.531284 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210328 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.211190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432470 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431318 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.217270 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433584 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.688295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692328 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9"} Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692941 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209161 4183 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.211375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209057 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212502 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213431 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.213487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214065 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214246 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.218714 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:35 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:35 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.533656 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.437962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:36 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:36 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.438098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721268 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log"
Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721371 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"f7be0e9008401c6756f1bf4076bb89596e4b26b5733f27692dcb45eff8e4fa5e"}
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.218193 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.223959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.434638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.435052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208980 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.433769 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.434416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209580 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211329 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213488 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214106 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214815 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.436366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:39 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:39 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.437058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.266574 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:40 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:40 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432865 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.221505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.222895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.223083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.228028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241601 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242056 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242067 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242113 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242411 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242501 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241610 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242696 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242721 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242902 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243032 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243183 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243304 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243353 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243418 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243625 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243650 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243762 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.246346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.247889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.248196 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.250203 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.252738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.256433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.257146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258172 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258243 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258764 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258966 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259106 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259175 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259199 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259241 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259285 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259422 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259435 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259478 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259494 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259526 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261272 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261681 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.264082 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.269514 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.307591 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.308505 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309621 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309967 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310582 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310883 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311166 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311376 4183 reflector.go:351] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311691 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311910 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312199 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313111 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313469 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311752 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.314268 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314444 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314669 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315003 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315365 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314133 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314064 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314550 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314611 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.316354 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.317420 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.317867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318034 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318037 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318165 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318346 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.320540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.321535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322443 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320545 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.335763 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.373275 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.377125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.377867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.378103 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380902 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.382316 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380925 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.392298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.761730 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.771421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:41 crc 
kubenswrapper[4183]: healthz check failed Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.772021 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773384 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773751 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.775921 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.778358 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782176 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782323 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782358 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782478 4183 reflector.go:351] Caches populated for *v1.Secret 
from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782508 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782516 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782644 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782866 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782919 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783210 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.787909 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.798160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208297 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208506 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.212364 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.213613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.219195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220254 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220488 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221293 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221356 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221537 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222323 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222449 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222589 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224049 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224403 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.226365 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.233581 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.253567 4183 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.275679 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.304066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.314169 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434430 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432531 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432304 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:45 crc kubenswrapper[4183]: I0813 19:57:45.432995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:45 crc kubenswrapper[4183]: I0813 19:57:45.433130 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433813 4183 patch_prober.go:28] 
interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:46 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.353241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady"
Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433148 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433633 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197613 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197747 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" podNamespace="openshift-marketplace" podName="community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.199300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.259669 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260237 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363500 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424550 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424707 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" podNamespace="openshift-marketplace" podName="redhat-operators-dcqzh"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.425866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428554 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4v97"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428689 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" podNamespace="openshift-marketplace" podName="certified-operators-g4v97"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.429911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.432870 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" podNamespace="openshift-image-registry" podName="image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436674 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437013 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251905-zmjv9"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436687 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.441216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.444276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451169 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.493579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.720542 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.723559 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.737102 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"]
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.756056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.816858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.981515 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982108 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982213 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982633 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982895 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.983994 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984246 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984410 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984449 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984701 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984987 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985149 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985556 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986310 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088206 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088951 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090317 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090872 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.091057 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.091318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092134 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.095720 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.097461 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.104405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.336484 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.342516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.362020 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.368744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.378023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.382516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.388390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434101 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.646975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.656723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.103073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.163072 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"]
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.438628 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:50 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:50 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.439249 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.806934 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"}
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808905 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"}
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.138891 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"]
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.159169 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"]
Aug 13 19:57:51 crc kubenswrapper[4183]: W0813 19:57:51.164371 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb917686_edfb_4158_86ad_6fce0abec64c.slice/crio-2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761 WatchSource:0}: Error finding container 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761: Status 404 returned error can't find the container with id 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433543 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433646 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828714 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831070 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831131 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831166 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834334 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834419 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.837609 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040207 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040326 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040670 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials.
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432494 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432613 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.846579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.947723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948212 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948646 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953627 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953856 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954051 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.095396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" podStartSLOduration=475.095328315 podStartE2EDuration="7m55.095328315s" podCreationTimestamp="2025-08-13 19:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:57:52.022866401 +0000 UTC m=+838.715531419" watchObservedRunningTime="2025-08-13 19:57:53.095328315 +0000 UTC m=+839.787992933" Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:54 crc kubenswrapper[4183]: [-]has-synced 
failed: reason withheld Aug 13 19:57:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678541 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678575 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678636 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.435181 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.436485 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859189 4183 generic.go:334] "Generic (PLEG): container finished" podID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c" exitCode=0 Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859276 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"} Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.432581 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.433008 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.076399 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214729 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214984 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.216641 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.223045 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.232093 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.240859 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl" (OuterVolumeSpecName: "kube-api-access-5nrgl") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "kube-api-access-5nrgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330182 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330247 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433681 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433851 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"} Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868624 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Aug 13 
19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868702 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433562 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.431964 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.432051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434217 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436078 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:02 crc kubenswrapper[4183]: 
[+]process-running ok Aug 13 19:58:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434049 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434158 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.431247 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.433048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433339 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337633 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337723 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338150 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.435695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.436073 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434455 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434626 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.318713 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320372 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320732 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.321019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320478 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324305 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324482 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434303 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:09 crc kubenswrapper[4183]: I0813 19:58:09.438110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:09 crc kubenswrapper[4183]: I0813 19:58:09.438240 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432062 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:10 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:10 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:11 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:11 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433293 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:12 crc kubenswrapper[4183]: I0813 19:58:12.433039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:12 crc kubenswrapper[4183]: I0813 19:58:12.433197 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432221 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:15 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:15 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434536 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.433911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:16 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:16 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434269 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435901 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted"
Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435988 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" gracePeriod=3600
Aug 13 19:58:21 crc kubenswrapper[4183]: E0813 19:58:21.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:58:22 crc kubenswrapper[4183]: E0813 19:58:22.211080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:58:23 crc kubenswrapper[4183]: E0813 19:58:23.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354289 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354912 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355202 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313227 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313316 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313602 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314935 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314991 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315100 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:58:47 crc kubenswrapper[4183]: E0813 19:58:47.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080216 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080465 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080824 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082076 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: 
\"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.097046 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098579 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100300 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100595 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100963 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100738 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101123 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101188 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101366 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101482 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101562 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101434 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102433 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102486 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.104960 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.106550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.106853 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.109448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.115525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.118523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.120930 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.121983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125507 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.126536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.129603 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " 
pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.132968 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133558 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133918 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.134768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.135522 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.136703 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.137371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: 
\"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.140741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.141097 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.142731 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184966 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.193122 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.199636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.201391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.202267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204150 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204993 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.205435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.206269 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.210386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.214730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod 
\"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.216506 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.218405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.220324 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.221533 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.224521 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238013 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238136 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.248146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290025 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod 
\"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290237 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290272 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290827 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291670 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291992 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292034 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292982 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293008 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293094 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.294860 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.299637 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.301252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302211 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302283 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302424 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.307601 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.308881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309144 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.310231 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.313753 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314121 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314221 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315010 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315508 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.317130 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.318242 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.320458 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321129 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321901 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325876 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.326425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.327555 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.328657 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332229 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.338987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.335887 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.341726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336159 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336368 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336530 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336626 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336744 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337373 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337619 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337677 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337693 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348521 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.355957 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.362342 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363214 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400108 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.402007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 
19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.367088 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368167 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368302 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.369592 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.370043 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372150 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod 
\"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.373311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.409707 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.390198 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.384106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.395368 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395563 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321494 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.393617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396453 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396729 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.416231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397048 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397222 4183 reflector.go:351] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397500 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397694 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.398396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395664 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.418067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.420976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.422725 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.431009 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.439899 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod 
\"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440377 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.441131 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.442587 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.421919 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443710 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.444208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: 
I0813 19:58:54.444393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.447106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: E0813 19:58:54.448506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 20:00:56.448363925 +0000 UTC m=+1023.141028744 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450060 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450766 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: 
\"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.452496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454480 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454555 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454721 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454853 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454889 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 
19:58:54.454943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455315 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455339 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455444 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455533 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455899 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457042 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457406 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457575 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.464222 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.465881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.466387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471110 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471856 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472186 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.475297 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476227 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476432 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.488593 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.489082 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.495258 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497182 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497293 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.503497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.510602 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512317 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512928 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513074 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513425 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513479 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514270 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514464 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514484 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514690 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.515130 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.516452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.521692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522394 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522642 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523288 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523771 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.524764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.525530 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.526728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.527908 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.529986 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530339 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.531438 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532171 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.533421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535185 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535903 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.537752 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.538292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540175 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.545213 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.557110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.558604 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564140 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.568286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.572614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.579070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588214 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.597455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.602158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.607672 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.608537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.621518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.623956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.635440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.647748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.652661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.668527 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670688 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.672019 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681257 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681384 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681481 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.686996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687616 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.698272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.702768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.706755 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.713401 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.717365 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.725372 4183 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.744518 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.745493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.760719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.763596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.764477 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.775288 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.778455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.794056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.795378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.797673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.799550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804231 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804981 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.826227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.828321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.838267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.854303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.863165 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.869181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.870553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.881145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.886198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.890507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.892445 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.904768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.908429 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.917146 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935259 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935682 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936354 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.948120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.972746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017340 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.183104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff"} Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.198351 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb"} Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.443884 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd556935_a077_45df_ba3f_d42c39326ccd.slice/crio-3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219 WatchSource:0}: Error finding container 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219: Status 404 returned error can't find the container with id 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219 
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.457129 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda702c6d2_4dde_4077_ab8c_0f8df804bf7a.slice/crio-2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5 WatchSource:0}: Error finding container 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5: Status 404 returned error can't find the container with id 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5 Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.870876 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63eb7413_02c3_4d6e_bb48_e5ffe5ce15be.slice/crio-51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724 WatchSource:0}: Error finding container 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724: Status 404 returned error can't find the container with id 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724 Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.887154 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933 WatchSource:0}: Error finding container 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933: Status 404 returned error can't find the container with id 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933 Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.173735 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4092a9f8_5acc_4932_9e90_ef962eeb301a.slice/crio-40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748 WatchSource:0}: Error finding container 
40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748: Status 404 returned error can't find the container with id 40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748 Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.210363 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0"} Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.222952 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1a8966_f594_490a_9fbb_eec5bafd13d3.slice/crio-44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2 WatchSource:0}: Error finding container 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2: Status 404 returned error can't find the container with id 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2 Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268604 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268703 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" 
event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.337665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.719372 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.741147 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.206658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.236049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.358206 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.361406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.432246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.500593 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.506781 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.512361 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.527693 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.539266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.545732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821"} Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" 
podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.718280 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb"}
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.740672 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"ffa2ba8d5c39d98cd54f79874d44a75e8535b740b4e7b22d06c01c67e926ca36"}
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.755194 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5c38ff_1fa8_4219_994d_15776acd4a4d.slice/crio-2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892 WatchSource:0}: Error finding container 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892: Status 404 returned error can't find the container with id 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.761219 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13ad7555_5f28_4555_a563_892713a8433a.slice/crio-8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141 WatchSource:0}: Error finding container 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141: Status 404 returned error can't find the container with id 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.877647 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"}
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.927578 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13045510_8717_4a71_ade4_be95a76440a7.slice/crio-63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc WatchSource:0}: Error finding container 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc: Status 404 returned error can't find the container with id 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc
Aug 13 19:59:01 crc kubenswrapper[4183]: W0813 19:59:01.027943 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d67253e_2acd_4bc1_8185_793587da4f17.slice/crio-282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722 WatchSource:0}: Error finding container 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722: Status 404 returned error can't find the container with id 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722
Aug 13 19:59:01 crc kubenswrapper[4183]: E0813 19:59:01.219981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.429123 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.542201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.635732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.804379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.933327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.191186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a7b73c0ecb48e250899c582dd00bb24b7714077ab1f62727343c931aaa84b579"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.265525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3137e2c39453dcdeff67eb557e1f28db273455a3b55a18b79757d9f183fde4e9"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.268364 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284147 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284445 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.299428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"2a3de049472dc73b116b7c97ddeb21440fd8f50594e5e9dd726a1c1cfe0bf588"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.300463 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302653 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.307569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"96c6df9a2045ea9da57200221317b32730a7efb228b812d5bc7a5eef696963f6"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528566 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.529978 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528729 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.530099 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.538973 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.539071 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541196 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541284 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.818165 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"]
Aug 13 19:59:05 crc kubenswrapper[4183]: W0813 19:59:05.099673 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ad279b4_d9dc_42a8_a1c8_a002bd063482.slice/crio-9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7 WatchSource:0}: Error finding container 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7: Status 404 returned error can't find the container with id 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.361704 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.428931 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.442340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.452974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.469059 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.700655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.737738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.748648 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.755641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.780538 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" exitCode=0
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.782330 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808228 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" exitCode=0
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808772 4183 scope.go:117] "RemoveContainer" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.862679 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"11a119fa806fd94f2b3718680e62c440fc53a5fd0df6934b156abf3171c59e5b"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.002575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.137683 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-8-crc"]
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220277 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220400 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220580 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.221163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"ae65970c89fa0f40e01774098114a6c64c75a67483be88aef59477e78bbb3f33"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.516774 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.546937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.553253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.625622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"078835e6e37f63907310c41b225ef71d7be13426f87f8b32c57e6b2e8c13a5a8"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626522 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626623 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649644 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649752 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:06.994479 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9127708_ccfd_4891_8a3a_f0cacb77e0f4.slice/crio-0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238 WatchSource:0}: Error finding container 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238: Status 404 returned error can't find the container with id 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:07.069131 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae0dfbb_a0a9_45bb_85b5_cd9f94f64fe7.slice/crio-717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5 WatchSource:0}: Error finding container 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5: Status 404 returned error can't find the container with id 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:07.241660 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d51f445_054a_4e4f_a67b_a828f5a32511.slice/crio-22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed WatchSource:0}: Error finding container 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed: Status 404 returned error can't find the container with id 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.687314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.708549 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.778736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.789641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.867302 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.914018 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.984149 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.082174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130544 4183 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" exitCode=0
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130656 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.206688 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.259460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.313212 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.326680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5"}
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399579 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399704 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400079 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.467595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.612595 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" exitCode=0
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.613514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.716179 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.718077 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729275 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.934742 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238"}
Aug 13 19:59:09 crc kubenswrapper[4183]: E0813 19:59:09.290158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.018352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.081748 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.135170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.167201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"24d2c9dad5c7f6fd94e47dca912545c4f5b5cbadb90c11ba477fb1b512f0e277"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.192024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"459e80350bae6577b517dba7ef99686836a51fad11f6f4125003b262f73ebf17"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.224534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"d6d93047e42b7c37ac294d852c1865b360a39c098b65b453bf43202316d1ee5f"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225748 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.278220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"17f6677962bd95967c105804158d24c9aee9eb80515bdbdb6c49e51ae42b0a5c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318621 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328253 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328368 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.356477 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"8ef23ac527350f7127dc72ec6d1aba3bba5c4b14a730a4bd909a3fdfd399378c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.411405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"653c5a1f52832901395f8f14e559c992fce4ce38bc73620d39cf1567c2981bf9"} Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.418058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427601 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427687 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.431216 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441212 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.490618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.491308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.908493 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909163 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909333 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.492982 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"} Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521463 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521576 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.522274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.675186 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.678052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.741163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"7a017f2026334b4ef3c2c72644e98cd26b3feafb1ad74386d1d7e4999fa9e9bb"} Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893079 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893258 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.457120 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.458286 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.555577 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.556327 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557394 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893152 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893326 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.988691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.990019 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002280 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002505 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.023732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.061266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.125384 4183 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03" exitCode=0 Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.125542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" 
event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.265455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.266575 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269384 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269458 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409125 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440141 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.528690 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531286 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.532753 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": 
dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531345 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.533736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.536046 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.540190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.544531 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: 
I0813 19:59:14.696686 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.696924 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712116 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712236 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.901883 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902317 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902415 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902445 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920225 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920358 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955313 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955461 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027582 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027930 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.295721 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Aug 13 19:59:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460930 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553274 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553471 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554294 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554327 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.678220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.708219 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"21441aa058a7fc7abd5477d6c596271f085a956981f7a1240f7a277a497c7755"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.709051 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.840114 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" exitCode=0 Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.841377 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842433 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842496 4183 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842989 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.843050 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.850667 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.850753 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092412 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the 
Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092516 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092636 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:
false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.435723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436359 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436499 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450374 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.993579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"55fde84744bf28e99782e189a6f37f50b90f68a3503eb7f58d9744fc329b3ad0"} Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995511 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995591 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:17 crc kubenswrapper[4183]: E0813 19:59:17.011104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.450267 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.451048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.013627 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" 
event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.027744 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.036728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"097e790a946b216a85d0fae9757cd924373f90ee6f60238beb63ed4aaad70a83"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.052644 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" exitCode=0 Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.053390 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.221555 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222256 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222765 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.223280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455540 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:19 crc kubenswrapper[4183]: I0813 19:59:19.132644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179333 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"7affac532533ef0eeb1ab47860360791c20d3b170a8f0f2ff3a4172b7a3e0d60"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179418 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:19.322218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481340 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 
13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.187629 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c5e2f15a8db655a6a0bf0f0e7b58aa9539a6061f0ba62d00544e8ae2fda4799c"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.191395 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3" exitCode=0 Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.193318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerDied","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.431924 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444106 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578019 4183 
remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578086 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578199 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.439511 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.440174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.321313 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"c58eafce8379a44387b88a8f240cc4db0f60e96be3a329c57feb7b3d30a9c1df"} Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.323541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.333687 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 
19:59:22.334196 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.395051 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"} Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444092 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444232 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383529 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383975 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384097 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446729 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.541045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"616a149529a4e62cb9a66b620ce134ef7451a62a02ea4564d08effb1afb8a8e3"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.543191 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.550606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.583318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.589691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.594615 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.595949 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.596185 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616582 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616746 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442155 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.525297 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe 
status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.526345 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.528019 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.529015 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.567026 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.621020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"1cca846256bf85cbd7c7f47d78ffd3a017ed62ad697f87acb64600f492c2e556"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.628659 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.655400 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.656171 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665497 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665614 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.666135 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"882d38708fa83bc398808c0ce244f77c0ef2b6ab6f69e988b1f27aaea5d0229e"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.672329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" 
event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"19ec4c1780cc88a3cfba567eee52fe5f2e6994b97cbb3947d1ab13f0c4146bf5"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.675828 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.676112 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.681676 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.682043 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.698210 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.807965 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876737 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877108 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877152 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889020 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889129 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.960069 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.961051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987503 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987631 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987733 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987653 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020461 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020575 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021135 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021177 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection 
refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021239 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021272 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.218175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.373679 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374597 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374931 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434683 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688139 4183 generic.go:334] "Generic (PLEG): container finished" podID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerID="b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9" exitCode=0 Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688651 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" 
event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.692565 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"98e6fc91236bf9c4dd7a99909033583c8b64e10f67e3130a12a92936c6a6a8dd"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.703346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"f45aa787fb1c206638720c3ec1a09cb5a4462bb90c0d9e77276f385c9f24e9bc"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.708073 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"} Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453310 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453464 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:26 crc 
kubenswrapper[4183]: E0813 19:59:26.580144 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580278 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580401 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442744 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.749963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512"}
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761394 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38" exitCode=0
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"}
Aug 13 19:59:28 crc kubenswrapper[4183]: E0813 19:59:28.212953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.371432 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podStartSLOduration=35619914.37117759 podStartE2EDuration="9894h25m14.371177589s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:28.369383438 +0000 UTC m=+935.062048516" watchObservedRunningTime="2025-08-13 19:59:28.371177589 +0000 UTC m=+935.063842437"
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441302 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441393 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.432333 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.433101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.843299 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.844565 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846243 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.435651 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.436305 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325467 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325538 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325757 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436986 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.669384 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.437963 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.438645 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.160183 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259101 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259682 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") "
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") "
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260634 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.290011 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.362543 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440531 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440941 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831200 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"}
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831293 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831374 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.343755 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344580 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344712 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433338 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433458 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841116 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841658 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872051 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872110 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872615 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.873283 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875150 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875369 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" gracePeriod=2
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875904 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875965 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949438 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985305 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985402 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.986513 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.987203 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019257 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019362 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020556 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020970 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438605 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:35 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:35 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438911 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.482606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751490 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752102 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751981 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752228 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752015 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752299 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.769313 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.858535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860310 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" exitCode=0
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"}
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022392 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022581 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.067663 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.432964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:36 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:36 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.433261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:37 crc kubenswrapper[4183]: E0813 19:59:37.215374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447280 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:38 crc kubenswrapper[4183]: E0813 19:59:38.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.435953 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.436590 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.932638 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"7342452c1232185e3cd70eb0d269743e495acdb67ac2358d63c1509e164b1377"}
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.939102 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"}
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.940161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:59:39 crc kubenswrapper[4183]: E0813 19:59:39.223292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.443735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:39 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:39 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.444275 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router"
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.961542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ff87aa3e7fe778204f9c122934ebd1afdd2fc3dff3e2c7942831852cb04c7fc6"} Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.115312 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.116977 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" containerID="cri-o://47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" gracePeriod=30 Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.447684 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.448063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.943630 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 
19:59:40.951272 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.951892 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.951963 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.952055 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952067 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952223 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952247 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.953316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.968896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.040960 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073230 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073359 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073391 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.090682 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:41 crc 
kubenswrapper[4183]: I0813 19:59:41.178551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.180394 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.253571 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 
19:59:41.355614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447413 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447506 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.611295 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.003196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005239 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005304 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: E0813 19:59:42.238198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.450760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:59:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.451196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.662137 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677438 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.664605 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677534 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016536 4183 generic.go:334] "Generic (PLEG): container finished" podID="378552fd-5e53-4882-87ff-95f3d9198861" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" exitCode=0 Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"} Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018079 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018295 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.439731 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.440334 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441340 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.594374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.819339 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871664 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872118 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872210 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.874435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.949683 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.950412 4183 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.298527 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.310054 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.441733 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.442634 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649936 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650038 4183 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649945 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650244 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.884165 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.885001 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:45 
crc kubenswrapper[4183]: I0813 19:59:45.948340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.437716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.438164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.329990 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330495 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330660 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573828 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.799589 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.080496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee"} Aug 13 19:59:48 crc kubenswrapper[4183]: 
E0813 19:59:48.334680 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.334954 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335577 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.434752 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.435306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.648599 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.649030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650082 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator 
namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650129 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651317 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651352 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652510 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" containerMessage="Container 
openshift-config-operator failed liveness probe, will be restarted" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652585 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" containerID="cri-o://f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" gracePeriod=30 Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029359 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 
crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.123308 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 
19:59:49.123512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139181 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"} Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139746 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.164685 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.194471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195109 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195253 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod 
\"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.202571 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.208273 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" (OuterVolumeSpecName: "kube-api-access-d7ntf") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "kube-api-access-d7ntf". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.220765 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" (OuterVolumeSpecName: "signing-key") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297207 4183 reconciler_common.go:300] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297611 4183 reconciler_common.go:300] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297734 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360235 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360331 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360594 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.443457 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.444219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.879979 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.880081 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177107 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177878 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"34cf17f4d863a4ac8e304ee5c662018d813019d268cbb7022afa9bdac6b80fbd"} Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.441573 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.443668 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.440975 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:51 crc kubenswrapper[4183]: [-]has-synced failed: 
reason withheld Aug 13 19:59:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.441203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.468060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.987666 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.988080 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" containerID="cri-o://5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" gracePeriod=30 Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"5ca33b1d9111046b71500c2532324037d0682ac3c0fabe705b5bd17f91afa552"} Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.409457 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.422430 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.427195 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" containerID="cri-o://aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" gracePeriod=30 Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437009 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437154 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.486875 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649433 4183 patch_prober.go:28] interesting 
pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649971 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.845735 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podStartSLOduration=12.845670263 podStartE2EDuration="12.845670263s" podCreationTimestamp="2025-08-13 19:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:52.781564366 +0000 UTC m=+959.474229104" watchObservedRunningTime="2025-08-13 19:59:52.845670263 +0000 UTC m=+959.538335011" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.219976 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378552fd-5e53-4882-87ff-95f3d9198861" path="/var/lib/kubelet/pods/378552fd-5e53-4882-87ff-95f3d9198861/volumes" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223157 4183 generic.go:334] "Generic (PLEG): container finished" podID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" exitCode=0 Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223289 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"} Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228417 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" exitCode=0 Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228543 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"} Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437248 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.854176 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.920999 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921104 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921134 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921170 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921195 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922508 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" (OuterVolumeSpecName: "client-ca") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.923655 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" (OuterVolumeSpecName: "config") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969111 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" (OuterVolumeSpecName: "kube-api-access-pzb57") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "kube-api-access-pzb57". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969275 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023502 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023541 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023554 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023573 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023585 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"} Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238184 4183 scope.go:117] "RemoveContainer" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238294 4183 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.436856 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.437289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.642583 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694343 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694387 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694472 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.698711 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.709297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.718283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844401 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844479 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.846529 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" (OuterVolumeSpecName: "client-ca") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.847339 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" (OuterVolumeSpecName: "config") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.861274 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" (OuterVolumeSpecName: "kube-api-access-hpzhn") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "kube-api-access-hpzhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.869651 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.871983 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.872086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876100 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876212 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:54 crc kubenswrapper[4183]: 
[+]poststarthook/max-in-flight-filter ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:54 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896445 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947258 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947475 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947494 4183 reconciler_common.go:300] "Volume detached for volume 
\"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947512 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953125 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953213 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.267619 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.291160 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" path="/var/lib/kubelet/pods/87df87f4-ba66-4137-8e41-1fa632ad4207/volumes" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294870 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"} Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294955 4183 scope.go:117] "RemoveContainer" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331335 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331506 4183 topology_manager.go:215] "Topology Admit Handler" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" podNamespace="openshift-controller-manager" podName="controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331700 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331717 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331736 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 
19:59:55.331745 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331814 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331971 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331991 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332008 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347326 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347460 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347597 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367304 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367481 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367520 4183 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367684 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod 
\"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.648929 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.649094 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.692567 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.694217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.696064 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.700916 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc 
kubenswrapper[4183]: I0813 19:59:55.701464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711293 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711751 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.791361 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.012351 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.152557 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.166000 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.177683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod 
\"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.435947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.436149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471761 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471976 4183 topology_manager.go:215] "Topology Admit Handler" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.475959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612404 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612630 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.613039 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.679475 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847427 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847823 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.848006 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.857636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.923763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.000386 4183 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.020516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.042895 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.052159 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.059066 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.070227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.115680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.165521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.173370 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:59:57 crc kubenswrapper[4183]: E0813 19:59:57.219465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437919 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.929510 4183 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-08-13T19:59:57.000640771Z","Handler":null,"Name":""} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085657 4183 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085937 4183 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115602 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" start-of-body= Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115726 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" Aug 13 19:59:58 crc kubenswrapper[4183]: E0813 19:59:58.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.357685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"42d711544e11c05fc086e8f0c7a21cc883bc678e9e7c9221d490bdabc9cffe87"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360293 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360735 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" exitCode=255 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442113 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442250 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: E0813 19:59:59.236509 4183 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.435876 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.436152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.866426 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.909397 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.027588 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 20:00:00 crc kubenswrapper[4183]: W0813 20:00:00.070724 4183 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f68e98_a8f9_417a_b92b_37bfd7b11e01.slice/crio-4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 WatchSource:0}: Error finding container 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54: Status 404 returned error can't find the container with id 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 Aug 13 20:00:00 crc kubenswrapper[4183]: E0813 20:00:00.219221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430252 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430382 4183 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.431281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451065 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:00 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.481406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"} Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517335 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563374 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod 
\"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563523 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.587423 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650425 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650573 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: 
connection refused" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.681316 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.767383 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: 
\"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.831735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354370 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354432 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354548 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.435662 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:01 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.437439 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.694507 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 20:00:02 crc kubenswrapper[4183]: E0813 20:00:02.212677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" 
pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:02 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434647 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.494456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.683346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"} Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435374 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:03 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.648682 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.649216 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:04 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872257 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872991 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872265 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873415 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873982 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875079 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875131 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" gracePeriod=2 Aug 13 20:00:05 
crc kubenswrapper[4183]: I0813 20:00:05.025423 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.026036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.396987 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.434620 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:05 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.435185 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.716564 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" exitCode=0 Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 
20:00:05.716715 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.717008 4183 scope.go:117] "RemoveContainer" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.719698 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.435459 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:06 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.436133 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650037 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= 
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650225 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.730625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.731126 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734101 4183 patch_prober.go:28] interesting pod/route-controller-manager-5b77f9fd48-hb8xt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734194 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.735317 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" 
event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.741610 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.742420 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.743332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.807511 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podStartSLOduration=12.807457808 podStartE2EDuration="12.807457808s" podCreationTimestamp="2025-08-13 19:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.802006483 +0000 UTC m=+973.494671341" watchObservedRunningTime="2025-08-13 20:00:06.807457808 +0000 UTC m=+973.500122546" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823476 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823671 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" 
podName="revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.824558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830140 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.843831 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.844033 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.857413 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946359 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946558 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.951349 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podStartSLOduration=13.9512997 podStartE2EDuration="13.9512997s" podCreationTimestamp="2025-08-13 19:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.947395608 +0000 UTC m=+973.640060666" watchObservedRunningTime="2025-08-13 20:00:06.9512997 +0000 UTC m=+973.643964418" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.023143 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.049629 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.059444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444468 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:07 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444561 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.597730 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.042742 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.440824 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:08 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.441453 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:09 crc kubenswrapper[4183]: E0813 20:00:09.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434143 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434352 4183 topology_manager.go:215] "Topology Admit Handler" podUID="227e3650-2a85-4229-8099-bb53972635b2" podNamespace="openshift-kube-controller-manager" podName="installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 
20:00:09.435408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.436985 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:09 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597420 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699205 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699398 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: 
\"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.137030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:10 crc kubenswrapper[4183]: E0813 20:00:10.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.218068 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.346719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.444256 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:10 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.447014 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.514376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.815665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818629 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818898 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.832568 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"} Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.408692 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.409538 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" containerID="cri-o://3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" gracePeriod=30 Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:11 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657414 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657694 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" containerID="cri-o://d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" gracePeriod=30 Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.839995 4183 patch_prober.go:28] 
interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.840697 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:12 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.827582 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:12 crc kubenswrapper[4183]: W0813 20:00:12.844932 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda0453d24_e872_43af_9e7a_86227c26d200.slice/crio-beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319 WatchSource:0}: Error finding container beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319: Status 404 returned error can't find the container with id beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319 Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874373 4183 generic.go:334] "Generic (PLEG): container finished" podID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" exitCode=0 Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"} Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.882748 4183 generic.go:334] 
"Generic (PLEG): container finished" podID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" exitCode=0 Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.883140 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"} Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.884751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.891048 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.077103 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" podStartSLOduration=13.077002107 podStartE2EDuration="13.077002107s" podCreationTimestamp="2025-08-13 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:13.063204943 +0000 UTC m=+979.755870041" watchObservedRunningTime="2025-08-13 20:00:13.077002107 +0000 UTC m=+979.769667125" Aug 13 20:00:13 crc kubenswrapper[4183]: E0813 20:00:13.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.415704 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.444931 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"] Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:13 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453140 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:13 crc kubenswrapper[4183]: W0813 20:00:13.496289 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod227e3650_2a85_4229_8099_bb53972635b2.slice/crio-ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef WatchSource:0}: Error finding container ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef: Status 404 returned error can't find the container with id ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.942064 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"} Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.944612 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"} Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967043 4183 generic.go:334] "Generic (PLEG): container finished" podID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" exitCode=0 Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967120 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"} Aug 13 20:00:14 crc kubenswrapper[4183]: E0813 20:00:14.233752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.437693 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:14 crc kubenswrapper[4183]: [+]process-running 
ok Aug 13 20:00:14 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.438231 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.873447 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.872215 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.874133 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949658 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949746 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976380 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"} Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976449 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.002072 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.103994 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104141 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104251 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104408 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.105448 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca" (OuterVolumeSpecName: "client-ca") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106161 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config" (OuterVolumeSpecName: "config") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.144033 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.164398 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt" (OuterVolumeSpecName: "kube-api-access-rvvgt") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "kube-api-access-rvvgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207183 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207297 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207317 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207334 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:15 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440501 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.687573 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.818880 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819048 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819085 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819178 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821131 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca" (OuterVolumeSpecName: "client-ca") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821665 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config" (OuterVolumeSpecName: "config") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.829234 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72" (OuterVolumeSpecName: "kube-api-access-njx72") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "kube-api-access-njx72". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.839170 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920862 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920931 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920954 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920969 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988923 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"} Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988936 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988992 4183 scope.go:117] "RemoveContainer" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988982 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.341272 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.432894 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433126 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.434291 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume" (OuterVolumeSpecName: "config-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.439630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c" (OuterVolumeSpecName: "kube-api-access-ctj8c") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "kube-api-access-ctj8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:16 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446463 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446488 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543389 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543514 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543544 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.006121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"} Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013564 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"} Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013619 4183 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013743 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337281 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337432 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337602 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337620 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337640 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337653 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337671 4183 cpu_manager.go:396] "RemoveStaleState: 
removing container" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337716 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338243 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338255 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338641 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.383506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.384930 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.385493 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.404515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.412347 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.448936 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:17 crc kubenswrapper[4183]: [-]has-synced failed: reason 
withheld Aug 13 20:00:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:17 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.449427 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.486887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487243 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487684 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.520086 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588519 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588681 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.589416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627075 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627262 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627423 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628068 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697161 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697279 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697345 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697383 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.801515 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.802501 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.847371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.914268 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.945291 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.077101 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"}
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.112972 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.129067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.155154 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.263518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464305 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464656 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806627 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806961 4183 topology_manager.go:215] "Topology Admit Handler" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" podNamespace="openshift-console" podName="console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.807928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.869628 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937945 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938025 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938098 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938179 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938207 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.951491 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.039936 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040041 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040204 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.043475 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.057114 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.058261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.062310 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.074712 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.088099 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.102692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.203213 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.293347 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" path="/var/lib/kubelet/pods/16f68e98-a8f9-417a-b92b-37bfd7b11e01/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.308462 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" path="/var/lib/kubelet/pods/83bf0764-e80c-490b-8d3c-3cf626fdb233/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.426015 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441234 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441519 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.537411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223065 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223268 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" podNamespace="openshift-controller-manager" podName="controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.224825 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.230713 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-9-crc" podStartSLOduration=11.230656964 podStartE2EDuration="11.230656964s" podCreationTimestamp="2025-08-13 20:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.208394449 +0000 UTC m=+986.901059297" watchObservedRunningTime="2025-08-13 20:00:20.230656964 +0000 UTC m=+986.923321692"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.253745 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254530 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.255015 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.259287 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.288758 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.350073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378560 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378654 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378717 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:20 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456309 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.487569 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=14.487510238 podStartE2EDuration="14.487510238s" podCreationTimestamp="2025-08-13 20:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.487207379 +0000 UTC m=+987.179872367" watchObservedRunningTime="2025-08-13 20:00:20.487510238 +0000 UTC m=+987.180175056"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489816 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489970 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.494680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.504770 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.567351 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.568035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.632650 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.870208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163475 4183 generic.go:334] "Generic (PLEG): container finished" podID="a0453d24-e872-43af-9e7a-86227c26d200" containerID="3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d" exitCode=0
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"}
Aug 13 20:00:21 crc kubenswrapper[4183]: E0813 20:00:21.234485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442436 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447411 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447973 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442109 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.817439 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:23 crc kubenswrapper[4183]: W0813 20:00:23.846698 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1713e8bc_bab0_49a8_8618_9ded2e18906c.slice/crio-1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715 WatchSource:0}: Error finding container 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715: Status 404 returned error can't find the container with id 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.033654 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086428 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086602 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.096156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.188626 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229921 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229949 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.237458 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.274326 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.278576 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.293300 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: 
I0813 20:00:24.460858 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:24 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.460981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.804322 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871691 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871820 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873620 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" 
start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949736 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949926 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.065141 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.242361 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.243561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted 
volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.249038 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251569 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" containerID="cri-o://7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" gracePeriod=30 Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.252282 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258232 4183 patch_prober.go:28] interesting pod/controller-manager-67685c4459-7p2h8 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258715 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.271544 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442476 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:25 crc 
kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:25 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442661 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.512090 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-5d9678894c-wx62n" podStartSLOduration=7.512029209 podStartE2EDuration="7.512029209s" podCreationTimestamp="2025-08-13 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.508307233 +0000 UTC m=+992.200972071" watchObservedRunningTime="2025-08-13 20:00:25.512029209 +0000 UTC m=+992.204694147" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.563868 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podStartSLOduration=14.563758574 podStartE2EDuration="14.563758574s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.56185584 +0000 UTC m=+992.254520888" watchObservedRunningTime="2025-08-13 20:00:25.563758574 +0000 UTC m=+992.256423352" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794333 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794578 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" 
podName="image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.797195 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797633 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.800477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.946364 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.948726 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: 
\"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949154 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949303 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949605 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.951486 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.959620 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podStartSLOduration=14.95954932 podStartE2EDuration="14.95954932s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.744372094 +0000 UTC m=+992.437037032" watchObservedRunningTime="2025-08-13 20:00:25.95954932 +0000 UTC m=+992.652214048" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.960208 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053600 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053652 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.056353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 
crc kubenswrapper[4183]: I0813 20:00:26.057262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.060476 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.072588 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.077750 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.095379 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.117737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.226722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.240942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.324629 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.328329 4183 generic.go:334] "Generic (PLEG): container finished" podID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" exitCode=2 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.344900 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.345270 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" containerID="cri-o://6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" gracePeriod=30 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.350498 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.352716 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.398176 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.459657 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.460573 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461266 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478939 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479169 4183 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") "
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479315 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") "
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") "
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480475 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480720 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.484328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478755 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.545169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" (OuterVolumeSpecName: "kube-api-access-khtlk") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "kube-api-access-khtlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.578325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.579830 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589642 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589706 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589728 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589743 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589861 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.607624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.693416 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.719467 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". PluginName "kubernetes.io/csi", VolumeGidValue ""
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743450 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:26 crc kubenswrapper[4183]: [+]has-synced ok
Aug 13 20:00:26 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:26 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743560 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.795611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.841473 4183 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.842278 4183 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.857663 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.959176 4183 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podaf6b67a3-a2bd-4051-9adc-c208a5a65d79"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice"
Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.959342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 20:00:27 crc kubenswrapper[4183]: E0813 20:00:27.229118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381153 4183 generic.go:334] "Generic (PLEG): container finished" podID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" exitCode=0
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.386049 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381490 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"}
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.390105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.439866 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.444650 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.455221 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"]
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.503648 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"]
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.530253 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.609055 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.614083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.622491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.623115 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"]
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.640968 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"]
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.925967 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.964493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216388 4183 patch_prober.go:28] interesting pod/route-controller-manager-6cfd9fc8fc-7sbzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body=
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216741 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused"
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.409079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"}
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.500363 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.500303538 podStartE2EDuration="11.500303538s" podCreationTimestamp="2025-08-13 20:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:28.495690207 +0000 UTC m=+995.188354975" watchObservedRunningTime="2025-08-13 20:00:28.500303538 +0000 UTC m=+995.192968266"
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.958488 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log"
Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.958581 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.062890 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063091 4183 topology_manager.go:215] "Topology Admit Handler" podUID="00d32440-4cce-4609-96f3-51ac94480aab" podNamespace="openshift-controller-manager" podName="controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: E0813 20:00:29.063268 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063287 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072441 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072480 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072519 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072558 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.074365 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075255 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config" (OuterVolumeSpecName: "config") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.097608 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.098220 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6" (OuterVolumeSpecName: "kube-api-access-5w8t6") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "kube-api-access-5w8t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175748 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175897 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176096 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176150 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176166 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176182 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176199 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176210 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.227261 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" path="/var/lib/kubelet/pods/af6b67a3-a2bd-4051-9adc-c208a5a65d79/volumes"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.238069 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" path="/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.277915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.280764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.289748 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.303540 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.297027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.446095 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.447603 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448594 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"}
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448637 4183 scope.go:117] "RemoveContainer" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.534635 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.542744 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.547562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580460 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.727692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.759209 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.892572 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.908205 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.302407 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435287 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435338 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.438191 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config" (OuterVolumeSpecName: "config") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.443688 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.458748 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.496356 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb" (OuterVolumeSpecName: "kube-api-access-9qgvb") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "kube-api-access-9qgvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523608 4183 generic.go:334] "Generic (PLEG): container finished" podID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6" exitCode=0
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523720 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerDied","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"}
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.524488 4183 scope.go:117] "RemoveContainer" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538585 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538648 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538667 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538681 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.546888 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"}
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547014 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547043 4183 scope.go:117] "RemoveContainer" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.863030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.873688 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.902979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.987534 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"]
Aug 13 20:00:30 crc kubenswrapper[4183]: W0813 20:00:30.987941 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a7bc46_2f44_4aff_9cb5_97c97a4a8319.slice/crio-7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e WatchSource:0}: Error finding container 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e: Status 404 returned error can't find the container with id 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e
Aug 13 20:00:31 crc 
kubenswrapper[4183]: I0813 20:00:31.086667 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"] Aug 13 20:00:31 crc kubenswrapper[4183]: W0813 20:00:31.106958 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d32440_4cce_4609_96f3_51ac94480aab.slice/crio-97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9 WatchSource:0}: Error finding container 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9: Status 404 returned error can't find the container with id 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9 Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.228752 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" path="/var/lib/kubelet/pods/1713e8bc-bab0-49a8-8618-9ded2e18906c/volumes" Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.230549 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" path="/var/lib/kubelet/pods/a560ec6a-586f-403c-a08e-e3a76fa1b7fd/volumes" Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.586239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"} Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.596368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"} Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.624983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e"} Aug 13 20:00:32 crc kubenswrapper[4183]: E0813 20:00:32.222479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:32 crc kubenswrapper[4183]: I0813 20:00:32.647092 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"e95a2bd82003b18d4f81fa9d98e21982ecce835638a4f389a02f1c7db1efd2d6"} Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.233310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403280 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403521 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.411971 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" 
containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412025 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.413558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435584 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435944 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435598 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.436371 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.445125 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.511701 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515713 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515757 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515908 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618353 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620528 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " 
pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.636224 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.656596 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.669757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.672107 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686249 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686351 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.687119 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.688349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.881044 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.989830 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podStartSLOduration=35619978.989690684 podStartE2EDuration="9894h26m18.989690681s" podCreationTimestamp="2024-06-27 13:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-08-13 20:00:33.978483142 +0000 UTC m=+1000.671147910" watchObservedRunningTime="2025-08-13 20:00:33.989690681 +0000 UTC m=+1000.682355409" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.051124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.153396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podStartSLOduration=10.153340467 podStartE2EDuration="10.153340467s" podCreationTimestamp="2025-08-13 20:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.152671758 +0000 UTC m=+1000.845336576" watchObservedRunningTime="2025-08-13 20:00:34.153340467 +0000 UTC m=+1000.846005335" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.752986 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerDied","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"} Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.755623 4183 scope.go:117] "RemoveContainer" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.778290 4183 generic.go:334] "Generic (PLEG): container finished" podID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014" exitCode=0 Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.784930 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:34 crc 
kubenswrapper[4183]: I0813 20:00:34.811093 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.876467 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877160 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878764 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878979 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" gracePeriod=2 Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883544 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883678 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884083 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949186 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:35 crc kubenswrapper[4183]: 
I0813 20:00:35.099506 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podStartSLOduration=10.091607161 podStartE2EDuration="10.091607161s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.228453009 +0000 UTC m=+1000.921117757" watchObservedRunningTime="2025-08-13 20:00:35.091607161 +0000 UTC m=+1001.784272259" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793329 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793791 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069" exitCode=1 Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793984 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerDied","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"} Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.794920 4183 scope.go:117] "RemoveContainer" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.802757 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" exitCode=0 Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804097 
4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804154 4183 scope.go:117] "RemoveContainer" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" Aug 13 20:00:36 crc kubenswrapper[4183]: E0813 20:00:36.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.534373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.810824 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"} Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.955501 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.958703 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-9-crc" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" containerID="cri-o://1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" gracePeriod=30 Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119038 4183 
kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119167 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.120818 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.138623 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.147529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150315 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150644 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150879 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: 
\"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.238027 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"]
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.254054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261225 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261665 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.605668 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.792007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.804994 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"]
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.814754 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.816472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.984187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:38 crc kubenswrapper[4183]: I0813 20:00:38.454577 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: E0813 20:00:39.390016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.568118 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.569114 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.582126 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.696974 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.697465 4183 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.700363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.753930 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" containerID="cri-o://0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" gracePeriod=14
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810566 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810716 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.831172 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944011 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944184 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944573 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944690 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.096732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.416091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.385622 4183 generic.go:334] "Generic (PLEG): container finished" podID="13ad7555-5f28-4555-a563-892713a8433a" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" exitCode=0
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.386137 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"}
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.401449 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.410324 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" containerID="cri-o://a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" gracePeriod=90
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.411028 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" gracePeriod=90
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.422041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.458973 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.702243 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703251 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b23d6435-6431-4905-b41b-a517327385e5" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703572 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703675 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.704089 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704172 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704371 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.705521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.738116 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834513 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834694 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834930 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835076 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835192 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835453 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835576 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835753 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.939227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.970617 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.971536 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974603 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974710 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974774 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.979601 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980404 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.994656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.001627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.003866 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.016768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.070052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.084201 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.107393 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.354144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.892229 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" exitCode=0
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.240716 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336192 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336353 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336387 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336507 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336627 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336677 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336719 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336762 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336889 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336935 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") "
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.342265 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.358965 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362656 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.363739 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.380757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" (OuterVolumeSpecName: "kube-api-access-w4r68") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "kube-api-access-w4r68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.411029 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412205 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412924 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection".
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412973 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.414127 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.421348 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.424319 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.427660 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439072 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439131 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439151 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439165 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439179 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439193 4183 reconciler_common.go:300] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439206 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439219 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439231 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439245 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439258 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439272 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439283 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439296 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005191 4183 generic.go:334] "Generic (PLEG): container finished" podID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2" exitCode=0
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerDied","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"}
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.006295 4183 scope.go:117] "RemoveContainer" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074016 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"}
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074906 4183 scope.go:117] "RemoveContainer" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.075503 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871563 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871677 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884710 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884925 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.952264 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.953407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.272608 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log"
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.503656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.603890 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"f8740679d62a596414a4bace5b51c52a61eb8997cb3aad74b6e37fb0898cbd9a"}
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.663716 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.788531 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"]
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.872562 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"]
Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.899327 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"]
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.265636 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"]
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266229 4183 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: E0813 20:00:46.266462 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266482 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266635 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.284461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.369983 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.378862 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379339 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379753 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380041 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380171 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380307 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.385252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.385923 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.386294 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.414543 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.454696 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.455345 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.463164 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.463969 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.466214 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.471661 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472147 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472334 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472521 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName:
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.467659 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.474041 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.507414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.511328 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576621 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.583259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.636523 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"]
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.647947 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.648016 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.649387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.683061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.689520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.733286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.736500 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.750459 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.753396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.761600 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"]
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.790375 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod
\"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.799700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820428 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.891525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956583 4183 generic.go:334] "Generic (PLEG): container finished" podID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039" exitCode=0 Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956890 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerDied","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"} Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.957877 4183 scope.go:117] "RemoveContainer" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.161170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerStarted","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"} Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.176297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"] Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.185972 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.304578 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.400373 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ad7555-5f28-4555-a563-892713a8433a" path="/var/lib/kubelet/pods/13ad7555-5f28-4555-a563-892713a8433a/volumes" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.558469 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"} Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.560090 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.837463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerStarted","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067045 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067940 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"043a876882e6525ddc5f76decf1b6c920a7b88ce28a2364941d8f877fa66e241"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 
20:00:48.239693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.501762 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519739 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519982 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.520026 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.607341 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"} Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.547720 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" start-of-body= Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.549557 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.371645 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.372722 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696048 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696731 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696908 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696966 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882103 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882186 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.884295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.952035 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: 
connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.954131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.205724 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.978620 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.981442 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"} Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.986820 4183 generic.go:334] "Generic (PLEG): container finished" podID="227e3650-2a85-4229-8099-bb53972635b2" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" exitCode=1 Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700337 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]log ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:00:56 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:00:56 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:00:56 crc kubenswrapper[4183]: 
[+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:00:56 crc kubenswrapper[4183]: readyz check failed Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700486 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:00:57 crc kubenswrapper[4183]: I0813 20:00:57.632304 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:58 crc kubenswrapper[4183]: I0813 20:00:58.184180 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540555 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.541338 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540701 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"524f541503e673b38ef89e50d9e4dfc8448cecf293a683f236b94f15ea56379f"} Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.623278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"d21952f722a78650eafeaffd3eee446ec3e6f45e0e0dff0fde9b755169ca68a0"} Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.986334 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"] Aug 13 20:01:00 crc kubenswrapper[4183]: I0813 20:01:00.033563 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.559067 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23d6435_6431_4905_b41b_a517327385e5.slice/crio-411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58 WatchSource:0}: Error finding container 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58: Status 404 returned error can't find the container with id 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58 Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.777733 4183 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01feb2e0_a0f4_4573_8335_34e364e0ef40.slice/crio-ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404 WatchSource:0}: Error finding container ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404: Status 404 returned error can't find the container with id ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404 Aug 13 20:01:01 crc kubenswrapper[4183]: I0813 20:01:01.334242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"} Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.077330 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.079077 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.701589 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702169 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"} Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702390 4183 scope.go:117] "RemoveContainer" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702657 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:01:03 crc kubenswrapper[4183]: I0813 20:01:03.198645 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404"} Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.873700 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.874405 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial 
tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876409 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876497 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949495 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949643 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.275984 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:05 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:05 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:05 crc 
kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:05 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.276114 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.481071 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.005457 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006124 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006301 4183 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010689 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010732 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.032166 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108676 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108732 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.120371 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:07 crc kubenswrapper[4183]: I0813 20:01:07.572965 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podStartSLOduration=42.572913908 podStartE2EDuration="42.572913908s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:06.779492984 +0000 UTC m=+1033.472157982" watchObservedRunningTime="2025-08-13 20:01:07.572913908 +0000 UTC m=+1034.265578806"
Aug 13 20:01:07 crc kubenswrapper[4183]: I0813 20:01:07.619329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerStarted","Data":"c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83"}
Aug 13 20:01:08 crc kubenswrapper[4183]: I0813 20:01:08.424319 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerStarted","Data":"e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5"}
Aug 13 20:01:08 crc kubenswrapper[4183]: I0813 20:01:08.733261 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.200767 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.201015 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.316197 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84fccc7b6-mkncc"]
Aug 13 20:01:10 crc kubenswrapper[4183]: E0813 20:01:10.498578 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod2f155735_a9be_4621_a5f2_5ab4b6957acd.slice/crio-e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod2f155735_a9be_4621_a5f2_5ab4b6957acd.slice/crio-conmon-e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5.scope\": RecentStats: unable to find data in memory cache]"
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.967968 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"}
Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209177 4183 generic.go:334] "Generic (PLEG): container finished" podID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerID="e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5" exitCode=0
Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5"}
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357581 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: E0813 20:01:14.358204 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358223 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358394 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.359130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485496 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485691 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485735 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485888 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485974 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.589709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.602313 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.603191 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.608153 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.609463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612556 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872504 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872632 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872695 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874520 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874583 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" gracePeriod=2
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872512 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874887 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876616 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985882 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985943 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985989 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985997 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.667879 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:16 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668083 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668168 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.745284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"391bd49947a0ae3e13b214a022dc7f8ebc8a0337699d428008fe902a18d050a6"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159036 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159333 4183 generic.go:334] "Generic (PLEG): container finished" podID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239" exitCode=1
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159362 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerDied","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159818 4183 scope.go:117] "RemoveContainer" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.614687 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.673898 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674669 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.720762 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776045 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776112 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.947235 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410224 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3" exitCode=0
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.613964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615688 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615583 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.540752 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.541070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010289 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010464 4183 scope.go:117] "RemoveContainer" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.134694 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.312504 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.355962 4183 generic.go:334] "Generic (PLEG): container finished" podID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359304 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerDied","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359386 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.360392 4183 scope.go:117] "RemoveContainer" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.468060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.024540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.602986 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.603405 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" containerID="cri-o://71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" gracePeriod=30
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.206371 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.468707 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.471111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"de440c5d69c49e4ae9a6d8d6a8c21cebc200a69199b6854aa7edf579fd041ccd"}
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.472858 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565665 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565985 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" containerID="cri-o://417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" gracePeriod=30
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.396139 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473329 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473426 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.625377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053119 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053229 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: W0813 20:01:24.084861 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e649ef6_bbda_4ad9_8a09_ac3803dd0cc1.slice/crio-48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107 WatchSource:0}: Error finding container 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107: Status 404 returned error can't find the container with id 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.294535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295758 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295918 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.297091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668324 4183 generic.go:334] "Generic (PLEG): container finished" podID="00d32440-4cce-4609-96f3-51ac94480aab" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871746 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.872426 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871878 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.872488 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.896333 4183 generic.go:334] "Generic (PLEG): container finished" podID="71af81a9-7d43-49b2-9287-c375900aa905" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.897921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerDied","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.898721 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.909362 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:25 crc kubenswrapper[4183]: I0813 20:01:25.425912 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227e3650-2a85-4229-8099-bb53972635b2" path="/var/lib/kubelet/pods/227e3650-2a85-4229-8099-bb53972635b2/volumes"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.201431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.469691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.470093 4183 generic.go:334] "Generic (PLEG): container finished" podID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805506 4183 generic.go:334] "Generic (PLEG): container finished" podID="b54e8941-2fc4-432a-9e51-39684df9089e" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805810 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerDied","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.806954 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807062 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807600 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"
Aug 13 20:01:27 crc
kubenswrapper[4183]: I0813 20:01:27.650207 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.650662 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.653706 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.654104 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:28 crc kubenswrapper[4183]: I0813 20:01:28.295104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"2af5bb0c4b139d706151f3201c47d8cc989a3569891ca64ddff1c17afff77399"} Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.540695 4183 patch_prober.go:28] interesting 
pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.541479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649538 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649680 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650213 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732117 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732259 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.296466 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"} Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307275 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:31 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:31 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 
13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:31 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307529 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307770 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.525000 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.590474 4183 generic.go:334] "Generic (PLEG): container finished" podID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" exitCode=0 Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerDied","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"} Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591986 4183 scope.go:117] 
"RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.798229 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799503 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" exitCode=0 Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"} Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799630 4183 scope.go:117] "RemoveContainer" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.800480 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649066 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873292 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873437 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873433 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873679 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052072 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052240 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout 
exceeded while awaiting headers)" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.307817 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-7-crc" podStartSLOduration=58.307555991 podStartE2EDuration="58.307555991s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:35.303077173 +0000 UTC m=+1061.995741941" watchObservedRunningTime="2025-08-13 20:01:35.307555991 +0000 UTC m=+1062.000220839" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.309160 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-crc" podStartSLOduration=56.309123315 podStartE2EDuration="56.309123315s" podCreationTimestamp="2025-08-13 20:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:27.138539278 +0000 UTC m=+1053.831204276" watchObservedRunningTime="2025-08-13 20:01:35.309123315 +0000 UTC m=+1062.001788104" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.078709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" containerID="cri-o://32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" gracePeriod=28 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.273056 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" containerID="cri-o://a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" gracePeriod=15 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668612 4183 
patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:36 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:36 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:36 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668747 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668916 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890298 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log" Aug 13 20:01:36 crc 
kubenswrapper[4183]: I0813 20:01:36.890423 4183 generic.go:334] "Generic (PLEG): container finished" podID="0f394926-bdb9-425c-b36e-264d7fd34550" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" exitCode=1 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerDied","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"} Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.891407 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895752 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895915 4183 generic.go:334] "Generic (PLEG): container finished" podID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" exitCode=2 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895953 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"} Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616220 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616433 4183 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540023 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995494 4183 generic.go:334] "Generic (PLEG): container finished" podID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" exitCode=0 Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005343 4183 generic.go:334] "Generic (PLEG): container finished" podID="cc291782-27d2-4a74-af79-c7dcb31535d2" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" exitCode=0 Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005439 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerDied","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.006541 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.729951 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.730089 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098301 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d67253e-2acd-4bc1-8185-793587da4f17" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" exitCode=0 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerDied","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"} Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.099636 4183 scope.go:117] "RemoveContainer" 
containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872298 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872449 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873231 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873354 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873415 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875268 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"} pod="openshift-console/downloads-65476884b9-9wcvx" 
containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875340 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" gracePeriod=2 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876252 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876316 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.991710 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:44 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:44 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:44 crc 
kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:44 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.993555 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:45 crc kubenswrapper[4183]: I0813 20:01:45.053241 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:45 crc kubenswrapper[4183]: I0813 20:01:45.053396 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.001768 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:47 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.002276 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.615729 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.616442 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.245860 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: livez check failed
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.246065 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540146 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580248 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580359 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729450 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729579 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.008964 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:52 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.011833 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.012278 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.362490 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.486931 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487071 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" exitCode=1
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487115 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"}
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.489136 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.149519 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513200 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" exitCode=0
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513465 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"}
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654140 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654271 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662178 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:54 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662334 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697708 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697999 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872519 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872695 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052469 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052615 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.187358 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]etcd-readiness ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:56 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.188201 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.615874 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.705528 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104674 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:58 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104897 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.249211 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podStartSLOduration=81.249140383 podStartE2EDuration="1m21.249140383s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:58.246314082 +0000 UTC m=+1084.938978760" watchObservedRunningTime="2025-08-13 20:01:58.249140383 +0000 UTC m=+1084.941805101"
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540096 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.577590 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729112 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729322 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.333608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.334488 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281117 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:02:03 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281331 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281457 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.477433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871283 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871391 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052147 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052528 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.615652 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.617086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539284 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729873 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729972 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.884598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log"
Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.891375 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" exitCode=137
Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.871947 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.872055 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044158 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:02:15 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044717 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053155 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053264 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.105045 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908592 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908860 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2" exitCode=1
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"}
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.910700 4183 
scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2" Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616356 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.539668 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.540042 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730015 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730523 4183 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.122979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123459 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" containerID="cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123664 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123708 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123747 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123873 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127333 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127486 4183 topology_manager.go:215] "Topology Admit Handler" podUID="48128e8d38b5cbcd2691da698bd9cac3" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127694 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127710 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127729 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127742 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127750 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" 
containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127770 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127864 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127876 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127912 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127925 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127943 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127952 
4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127962 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127970 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127979 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127987 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127996 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128003 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128154 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128165 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128178 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" 
containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128187 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128197 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128208 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128228 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128235 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.128466 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128480 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: 
E0813 20:02:21.128492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128500 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128688 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128704 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133575 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133659 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf055e84f32193b9c1c21b0c34a61f01" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.134289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158390 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158498 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158611 4183 topology_manager.go:215] "Topology Admit Handler" podUID="92b2a8634cfe8a21cffcc98cc8c87160" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159084 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159105 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159116 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159124 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159135 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159142 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159158 4183 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159170 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159295 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159313 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159323 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160382 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" containerID="cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160501 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" containerID="cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160637 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" 
containerID="cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304205 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304373 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304395 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304438 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304508 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304547 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304579 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304617 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406246 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406269 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406426 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407344 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407640 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407669 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407700 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407732 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407761 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.976484 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.979513 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" exitCode=2 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.004564 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.007470 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011128 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011262 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011369 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" exitCode=2 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023749 4183 generic.go:334] "Generic (PLEG): container finished" podID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" 
containerID="7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023924 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"} Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.029132 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.031121 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.036474 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.039619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.040716 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050510 4183 generic.go:334] "Generic (PLEG): container finished" podID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerID="c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050563 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83"} Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.058308 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.064708 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.503045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.507980 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.871920 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.872057 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054230 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054353 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616315 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616946 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539666 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539760 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.729509 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" start-of-body= Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.730239 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.144042 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.429055 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.431535 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" exitCode=0 Aug 13 20:02:31 crc kubenswrapper[4183]: E0813 20:02:31.866061 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events\": dial tcp 192.168.130.11:6443: connect: connection refused" 
event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.891324 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.894905 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.897310 4183 
status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898116 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898976 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.902627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.912313 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.919507 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.923328 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.925575 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.927066 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.937900 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.939973 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.942267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.945280 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.949082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.953861 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.954953 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.956319 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.959661 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.960501 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.962225 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.963159 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.964075 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.967216 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.969357 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.974407 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.976307 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.978201 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.979062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.981029 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.983325 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.984602 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.985322 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986095 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986957 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.988177 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.271926 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.272938 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.274592 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.275658 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.276688 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: I0813 20:02:32.276739 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.277635 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.480426 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.886290 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 20:02:33 crc kubenswrapper[4183]: E0813 20:02:33.131135 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 
openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474262 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474347 4183 generic.go:334] "Generic (PLEG): container finished" podID="79050916-d488-4806-b556-1b0078b31e53" containerID="f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721" exitCode=1
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474548 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"}
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.476760 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.478490 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.479453 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.480291 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.481111 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.483928 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485227 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485599 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.486055 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.487543 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488271 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" exitCode=255
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488325 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"}
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491152 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491753 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.492511 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.493378 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.630867 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.635107 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.637395 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640083 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640704 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.642214 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.643737 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.644623 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.645209 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649266 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649862 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.650680 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.651510 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.653001 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.654423 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.656190 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.658048 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659026 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659894 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.660903 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.661440 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.663152 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.665048 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.665610 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.666446 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667012 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667883 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669062 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669996 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.670695 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.672064 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.673439 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.675534 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: E0813 20:02:33.693056 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.776134 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.779418 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780020 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780612 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.781261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782027 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782951 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.784489 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.785098 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.785578 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.786645 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787280 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787737 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788288 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788949 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.789443 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.790632 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.814858 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817226 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817278 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817359 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817431 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817484 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817511 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817533 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") "
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.823308 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824086 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824283 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca" (OuterVolumeSpecName: "client-ca") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.829595 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831321 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831916 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.832096 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config" (OuterVolumeSpecName: "config") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.839907 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.842529 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849603 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5" (OuterVolumeSpecName: "kube-api-access-hqzj5") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "kube-api-access-hqzj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849899 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849943 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849964 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.850010 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.853018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.855311 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config" (OuterVolumeSpecName: "config") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857175 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq" (OuterVolumeSpecName: "kube-api-access-5hdnq") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "kube-api-access-5hdnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.854495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.858698 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.859277 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.860308 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.861870 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.867239 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.869475 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" 
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.877766 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.878742 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.880544 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.881319 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952876 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: 
\"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952928 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953030 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953117 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953202 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953432 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5hdnq\" (UniqueName: 
\"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953448 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953461 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953475 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953486 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953865 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953916 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock" (OuterVolumeSpecName: "var-lock") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.962464 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.965156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054315 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054364 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054379 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054393 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054406 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054418 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 
20:02:34.496557 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496587 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.497818 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.498384 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.499163 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.500878 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.501436 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.502971 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504043 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504689 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504911 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.510569 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.511628 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.512494 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.513769 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515004 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515683 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515687 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515875 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.517041 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.518184 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.519256 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.520329 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.521510 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522740 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522921 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523083 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523679 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.524237 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.525267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.533218 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.535188 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.537986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.538638 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.539522 4183 
status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.540650 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.541552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.542377 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.543332 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.546395 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.547282 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.548264 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.549312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550070 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550576 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.551271 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.553470 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.554170 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.555246 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556157 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556904 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.557767 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.564338 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.567869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.568709 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.569440 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.570700 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.571439 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.572174 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.573967 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.576134 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577151 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578274 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.579466 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.580407 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.583300 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.584394 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.585512 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587040 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.588412 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.871918 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.872067 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956951 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.957575 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.958710 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.959960 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.960004 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.217194 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.218923 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.219565 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.221954 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.223049 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224121 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224713 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.225338 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.226106 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.227234 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.228098 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.229299 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.230995 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.231916 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.232540 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.233328 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: E0813 20:02:35.295244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s"
Aug 13 20:02:38 crc kubenswrapper[4183]: E0813 20:02:38.497532 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539274 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.971048 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.976426 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.980409 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.983726 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.986091 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.993431 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996708 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996719 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.999005 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.005357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009423 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.012959 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.013871 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.014300 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.015256 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.016421 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017243 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017766 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.020040 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.021231 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023635 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023942 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.024124 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.025519 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.029754 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.031242 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.032249 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.033030 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034354 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034382 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.035124 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036126 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036459 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.037488 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.038454 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.039382 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.040466 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.041496 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.042642 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.043611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.044625 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 
20:02:40.045613 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.047488 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.049417 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.050515 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.051643 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 
20:02:40.053272 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.057935 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.061766 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.062904 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.063535 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.064534 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.066270 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.067941 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.068702 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.070618 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.071518 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.073352 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.075716 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.077205 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.079158 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.084023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.086202 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.088068 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.089629 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090453 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090596 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090899 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090942 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.092038 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093259 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093311 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093608 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193911 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193988 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194026 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194058 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194121 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194161 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc 
kubenswrapper[4183]: I0813 20:02:40.194187 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194206 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194228 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194249 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194277 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194297 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod 
\"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194324 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194346 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194382 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194409 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194436 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194711 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194821 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194884 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194926 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194946 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc 
kubenswrapper[4183]: I0813 20:02:40.194967 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195019 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195045 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195075 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195119 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195148 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195296 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock" (OuterVolumeSpecName: "var-lock") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195961 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" (OuterVolumeSpecName: "config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196677 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196746 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197177 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197289 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" (OuterVolumeSpecName: "audit") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197696 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198116 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198903 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199238 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199301 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199638 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" (OuterVolumeSpecName: "service-ca") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.200026 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.202489 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.204030 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" (OuterVolumeSpecName: "console-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.208569 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" (OuterVolumeSpecName: "kube-api-access-r8qj9") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "kube-api-access-r8qj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.218292 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.220721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.221921 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227524 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.228713 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss" (OuterVolumeSpecName: "kube-api-access-4f9ss") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "kube-api-access-4f9ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.231737 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.236227 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.237452 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". PluginName "kubernetes.io/csi", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.238634 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.239584 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.241981 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" (OuterVolumeSpecName: "kube-api-access-lz9qh") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "kube-api-access-lz9qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297045 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297115 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297139 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297153 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") on node \"crc\" 
DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297170 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297185 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297199 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297212 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297227 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297240 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297254 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297271 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297288 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297304 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297318 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297333 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297347 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297364 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297398 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8qj9\" (UniqueName: 
\"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297413 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297429 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297444 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297458 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297472 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297485 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297501 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") on node \"crc\" DevicePath 
\"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297515 4183 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297529 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297542 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297559 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297573 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588367 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588554 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.591107 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.592722 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.593893 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.596348 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598081 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598716 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.599917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.602294 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.604509 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.605512 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.608720 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.613287 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.614356 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.615596 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.616744 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.617542 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.618533 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.624663 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.626103 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.628269 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.629763 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.630956 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.632720 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.633709 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.634588 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.643673 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.644669 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.647267 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.649110 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650116 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650878 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.652045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"}
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.655957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656491 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656635 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"}
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656685 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.658394 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.661451 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662678 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662727 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.663485 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.664381 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.665472 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.666156 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.667619 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.677670 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.679546 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.681101 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.683452 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.684923 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.686295 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.687519 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.688643 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.690579 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.692375 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.695015 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.710178 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.715430 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.717448 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.720003 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.721741 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.722877 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.723600 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.724325 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725055 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725735 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.728397 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.731248 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.738267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.740283 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.742713 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.743524 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.747326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.748566 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.749716 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.754477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.755827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.756452 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757134 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757716 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.758331 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759046 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759607 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760155 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760650 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761316 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.762517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.763554 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.764555 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.765964 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.767552 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.770117 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.220590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" path="/var/lib/kubelet/pods/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/volumes"
Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.223978 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c1db1508241fbac1bedf9130341ffe" path="/var/lib/kubelet/pods/53c1db1508241fbac1bedf9130341ffe/volumes"
Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.228241 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631cdb37fbb54e809ecc5e719aebd371" path="/var/lib/kubelet/pods/631cdb37fbb54e809ecc5e719aebd371/volumes"
Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615716 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615907 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:43 crc kubenswrapper[4183]: E0813 20:02:43.133995 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.871378 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.872024 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:44 crc kubenswrapper[4183]: E0813 20:02:44.899307 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.134320 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.136079 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.137078 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.138687 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140025 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140097 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.213624 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.215267 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.218619 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.222751 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.223611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.224466 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.225551 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.226547 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.227405 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229145 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229898 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.230641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.231662 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.232379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.233232 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.234537 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13
20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.208317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.210866 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.211828 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.212948 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.213960 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.214838 4183 
status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.216124 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.217011 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.218117 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.219027 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.220223 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.221319 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.222379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.223687 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.225764 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.226823 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.227763 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.228582 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229549 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229580 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: E0813 20:02:46.230413 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.231018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.212466 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.213743 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.215143 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216187 
4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216927 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.218184 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.219320 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.220300 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.221351 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223186 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223737 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.224717 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.227581 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.228651 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229338 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229363 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230133 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: E0813 20:02:49.230266 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.231155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.232316 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539512 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539728 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:51 crc kubenswrapper[4183]: E0813 20:02:51.901264 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:53 crc 
kubenswrapper[4183]: E0813 20:02:53.137504 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707203 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707366 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707468 4183 
kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707532 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872090 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872231 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.219044 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.220296 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc 
kubenswrapper[4183]: I0813 20:02:55.222133 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.223240 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224009 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224820 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.226944 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.228494 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.230011 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231203 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231769 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.232434 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.233162 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.234290 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.239215 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.240931 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.242716 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.244399 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.245681 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.336066 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.337683 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 
20:02:55.340507 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.341480 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342210 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342229 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:58 crc kubenswrapper[4183]: E0813 20:02:58.904133 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541340 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541485 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 
10.217.0.57:8443: connect: connection refused" Aug 13 20:03:03 crc kubenswrapper[4183]: E0813 20:03:03.139563 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871666 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871934 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.210563 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.211517 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.212300 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.213267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.214501 4183 
status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.215662 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.217155 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.218226 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.219282 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.220280 4183 
status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221003 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221764 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.222425 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.223649 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.224408 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.225165 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226077 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226826 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.227494 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" 
pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.444295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.445355 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.446196 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.447314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448427 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448472 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:05 crc 
kubenswrapper[4183]: E0813 20:03:05.908710 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540596 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540878 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.947144 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948536 4183 generic.go:334] "Generic (PLEG): container finished" podID="51a02bbf-2d40-4f84-868a-d399ea18a846" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" exitCode=1 Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948600 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerDied","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950159 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950921 4183 scope.go:117] "RemoveContainer" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.951127 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.952515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.953682 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.954986 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.956447 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.957937 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.959092 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.961099 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962411 4183 
status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962999 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.963527 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.964159 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.965230 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.966427 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.967529 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.970578 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.971704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.972739 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.973474 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:12 crc kubenswrapper[4183]: E0813 20:03:12.913055 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:13 crc kubenswrapper[4183]: E0813 20:03:13.142309 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873139 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873303 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.214539 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.215659 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.217023 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.218560 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.219446 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.220423 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.221418 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: 
I0813 20:03:15.222705 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.223572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.224623 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.225457 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.226282 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227309 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227988 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.228621 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.230261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.235597 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.236756 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.238064 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.239153 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.649213 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.650252 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.651715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.652691 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653526 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540153 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:19 crc kubenswrapper[4183]: E0813 20:03:19.915210 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:22 crc kubenswrapper[4183]: E0813 20:03:22.278613 4183 desired_state_of_world_populator.go:320] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" volumeName="registry-storage"
Aug 13 20:03:23 crc kubenswrapper[4183]: E0813 20:03:23.144835 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.609959 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610356 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872084 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872210 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.047833 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.049084 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.050205 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051015 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051935 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.052827 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.053835 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.054432 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055227 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.056836 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.057551 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058188 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058752 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059343 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059963 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.060567 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061288 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061997 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.062546 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.063426 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.212956 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.214088 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.215231 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.216167 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.217076 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.218506 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.219432 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.220191 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.226475 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.227704 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229071 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229894 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.230754 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.231917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.232972 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.233637 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.234455 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.235441 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.236316 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.237150 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422534 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422867 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.008298 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009152 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009639 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010249 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010877 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010914 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.052150 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.053483 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: 
I0813 20:03:26.055448 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.056550 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.057467 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.058261 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.059259 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.060223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061058 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061933 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.062691 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.063579 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.064438 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065991 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.066908 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.067756 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.068570 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.069641 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.071225 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.072344 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.073650 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.074939 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.917366 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.231826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232062 4183 kuberuntime_manager.go:1262] container 
&Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt --files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067346 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" Aug 13 
20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067614 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.068524 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.069916 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.070591 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.071345 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.072227 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.073426 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.074561 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.075600 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.076508 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.077389 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078278 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078943 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.079522 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080234 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080923 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.081510 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.082587 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.085724 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.088098 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089261 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089892 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540064 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540268 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.361546 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362141 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-apiserver-check-endpoints,Image:quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69,Command:[cluster-kube-apiserver-operator check-endpoints],Args:[--listen 0.0.0.0:17698 --namespace $(POD_NAMESPACE) --v 2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:check-endpoints,HostPort:0,ContainerPort:17698,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j2kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5): CreateContainerError: context deadline exceeded Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.110567 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.112827 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.114285 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115521 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.116366 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.117287 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.118542 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.119645 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.120606 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.121994 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.123110 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.125717 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.126669 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.127456 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128200 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128897 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.131474 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132164 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132706 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134032 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134677 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.135378 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.136175 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.804096 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804196 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804225 4183 scope.go:117] "RemoveContainer" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955395 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service 
failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955915 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.956046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.957927 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Aug 13 20:03:32 
crc kubenswrapper[4183]: E0813 20:03:32.958531 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.958662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.139999 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.143820 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.146579 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": 
dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.146712 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.148155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.152577 4183 
status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.154245 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.156673 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.160183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.163263 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.164587 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.165673 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.166966 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.167635 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.170179 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.171476 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.179570 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.180585 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.181576 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.182543 4183 
status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.184442 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185063 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185589 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186180 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186691 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.187497 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.188824 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.192558 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.193641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195080 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195730 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.197338 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.198623 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 
20:03:33.200950 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.201666 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.202457 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.204072 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.205686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.207140 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.208048 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209113 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209910 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.210405 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211084 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211709 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.212357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213086 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213621 
4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.214235 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.919739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872349 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872962 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.211419 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.212210 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.213376 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.214993 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.216000 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.217673 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.220219 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.223477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.224896 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.226685 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.234192 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.237326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.239180 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.240549 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.241331 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.242495 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.243645 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.244446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.245583 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247018 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.249169 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665696 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665987 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: 
I0813 20:03:35.666063 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.666083 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.259121 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.260281 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.261425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.262254 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263093 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263115 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node 
status exceeds retry count" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932530 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932730 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: context deadline exceeded Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.933059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.189487 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.191418 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.192501 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.193612 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.197451 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.199123 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.200252 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201146 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201952 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.202673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.203381 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204067 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204738 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.205462 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206116 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206760 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.207586 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213151 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.215625 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.216425 4183 
status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.217261 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.218475 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.219215 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.221347 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541123 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.872486 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238198 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238937 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash Aug 13 20:03:40 crc kubenswrapper[4183]: set -o allexport Aug 13 20:03:40 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Aug 13 20:03:40 crc kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env Aug 13 20:03:40 crc kubenswrapper[4183]: else Aug 13 20:03:40 crc kubenswrapper[4183]: echo "Error: 
/etc/kubernetes/apiserver-url.env is missing" Aug 13 20:03:40 crc kubenswrapper[4183]: exit 1 Aug 13 20:03:40 crc kubenswrapper[4183]: fi Aug 13 20:03:40 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Aug 13 20:03:40 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2
e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueF
rom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.239006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.921336 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:41 crc 
kubenswrapper[4183]: I0813 20:03:41.220155 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.221970 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.223472 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.224279 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225119 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225675 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.226532 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.227446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.228282 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.229134 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.230321 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.231455 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.232479 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.233494 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.235245 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.236420 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.237317 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.238312 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.239691 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.241177 4183 
status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.242645 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.243418 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244192 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244936 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.245929 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:43 crc kubenswrapper[4183]: E0813 20:03:43.150624 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431158 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431657 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872013 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872130 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.212536 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.214267 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.215631 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.217100 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.219131 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.220211 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.221167 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.222126 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223070 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223960 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.224621 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.225944 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.226706 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.227767 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.229005 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.230031 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.231325 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.233490 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.234690 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.235763 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.236752 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.237925 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.238925 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.239881 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245310 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245885 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.247417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.248220 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.249427 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.250417 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc 
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.251017 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.251583 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252204 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252927 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.253378 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.254047 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.256380 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.257620 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.258610 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.259600 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.260899 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.261555 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.262370 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263113 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263691 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.265312 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.266454 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.267516 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.268704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.269974 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.647315 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648097 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648578 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649118 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649679 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649721 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:47 crc kubenswrapper[4183]: E0813 20:03:47.924557 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540076 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.547620 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.815311 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: E0813 20:03:51.818337 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818414 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818438 4183 scope.go:117] "RemoveContainer" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.908296 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.973248 4183 scope.go:117] "RemoveContainer" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.999520 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.136716 4183 scope.go:117] "RemoveContainer" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.251946 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.332974 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334677 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334969 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.347377 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352028 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352959 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.353908 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.354585 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355237 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355963 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.357058 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.359662 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.360466 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.361210 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.362273 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.363085 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.374354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.377143 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.379331 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.381240 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.382386 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.383532 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.384450 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.385304 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386031 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386926 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.388027 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.389206 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.591196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594452 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594543 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594627 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.595310 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.596506 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.597199 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.598261 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599060 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599094 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599826 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601014 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.601085 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"} err="failed to get container status \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601144 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.604198 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.605258 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.606558 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.608023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.609312 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.610283 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611159 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611766 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.612431 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.613495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.615178 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.616312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.618643 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.625019 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.626334 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.628113 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.631859 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.645070 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.650987 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.651253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.655764 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657134 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657711 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.658417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659137 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659921 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.663326 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.667993 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.670400 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.673032 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.675751 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.680620 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.689708 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.691103 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.694349 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.699256 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.703504 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.705175 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.719389 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.724042 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.730489 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.737357 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.740380 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.746116 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.747167 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.815913 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.836479 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"4df62f5cb9c66f562c10ea184889e69acedbf4f895667310c68697db48fd553b"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.846168 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847236 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"} err="failed to get container status \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": rpc error: code = NotFound desc = could not 
find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847256 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847353 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847399 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847419 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.865429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"da0d5a4673db72bf057aaca9add937d2dd33d15edccefb4817f17da3759c2927"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.884076 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.923425 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.924626 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.925393 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.926622 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.930474 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.931532 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.932827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.933481 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.934358 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.938533 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.939640 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.941010 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.945088 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.946475 4183 
status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956057 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956738 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.962403 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.970510 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.972115 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.975619 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.978427 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.996070 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.997568 4183 
status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.001222 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.007673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.153513 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161396 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161515 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161545 
4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161685 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.165607 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165661 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"} err="failed to get container status \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165680 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166373 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container 
\"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166417 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.388109 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.489002 4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.490441 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490514 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"} err="failed to get container status \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490537 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.492177 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492257 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"} err="failed to get container status \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": rpc error: code = NotFound desc = could not find container \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492291 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.554953 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558249 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558305 4183 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.558335 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.900996 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901228 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901273 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"
Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.914540 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914650 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"} err="failed to get container status \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914676 4183 scope.go:117] "RemoveContainer" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.985211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"}
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.989633 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.990768 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.992256 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.993070 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.994202 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.995251 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.997368 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.998538 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999235 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999727 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000364 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.001581 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.005208 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006195 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006867 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.007503 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.008135 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.010212 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.012267 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.013308 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.014224 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.015561 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.017215 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018054 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018769 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.034042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.050142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.067978 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.070249 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.071425 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.073964 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.077460 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.078588 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.081030 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.082476 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.084958 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.086267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e302077a679b703dfa8553f1ea474302e86cc72bc23b53926bdc62ce33df0f64"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.088211 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.094913 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.097251 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102324 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.103968 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.106639 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.113311 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.116123 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.118679 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.123027 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.124242 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125924 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.126600 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.128062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129082 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129903 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.135059 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.136239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.138270 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.152304 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.153725 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.155006 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.156625 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.159271 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.164315 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165074 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165661 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.166382 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.167048 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.172278 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176069 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176915 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.181046 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.183126 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.189981 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.193940 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.198031 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.199125 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc
kubenswrapper[4183]: I0813 20:03:54.200183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.205213 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211008 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211825 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222035 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.230992 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233069 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233933 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.234623 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.235869 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.242517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.243296 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.245137 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.246348 4183 
status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.249618 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.250358 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.251385 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252168 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252716 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.253704 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.254575 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.255223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.261472 4183 
status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.281818 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.287989 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.289834 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.300725 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.321118 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.343664 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.361538 4183 scope.go:117] "RemoveContainer" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.368418 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.382082 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.408899 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.425935 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.431358 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.436109 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.437653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.439496 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.441269 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.475968 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.481505 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.502828 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.525338 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.629285 4183 scope.go:117] "RemoveContainer" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708140 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708286 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708320 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708451 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.738941 4183 scope.go:117] "RemoveContainer" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875372 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 
10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875453 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875464 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.928188 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.960376 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b2a8634cfe8a21cffcc98cc8c87160.slice/crio-dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9.scope\": RecentStats: unable to find data in memory cache]" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.214351 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.218089 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219195 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219961 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.224599 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.228904 
4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.229919 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231029 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231920 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.236962 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.238438 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.240074 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.241611 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.245553 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.249464 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.251421 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.254160 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.255417 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.256743 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.257566 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.260917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.264107 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.266770 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: 
I0813 20:03:55.277921 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.279402 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.283013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.285316 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.290481 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.620454 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.621742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622475 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" exitCode=255 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622633 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.627053 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.628078 4183 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629064 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629596 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629704 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerDied","Data":"dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630399 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630462 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.632367 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.632479 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.633693 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.648340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.650425 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.652757 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.655106 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.656986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658549 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerDied","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660075 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660097 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.663572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.663943 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.664898 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665450 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665467 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.665996 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.666515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.667399 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.671709 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.676322 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.676514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.681983 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.682718 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683350 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683933 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.684530 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685091 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685546 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686263 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 
10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686586 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686900 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.687592 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.693863 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.694675 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.718511 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.755648 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.758261 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.759512 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.761201 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.765354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.785133 4183 status_manager.go:853] "Failed to get status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.801950 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.834459 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.842708 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.869897 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.901900 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.907735 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.922435 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.942988 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.963378 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.983100 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.004700 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.024106 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.047217 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.061301 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.081932 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.101674 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.122544 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.157367 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.167833 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.181304 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.201007 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.221447 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.246117 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.262127 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.286681 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.301302 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.321179 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.340915 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.696466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697332 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697924 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" exitCode=255 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698501 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698518 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.706332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.715764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.723053 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.726435 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.737374 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739719 4183 generic.go:334] "Generic (PLEG): container finished" 
podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.740447 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.744316 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.745125 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748107 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748152 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:58.788129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver 
pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.288123 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.288273 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.290115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.290189 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.291275 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.292131 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.293145 4183 status_manager.go:853] "Failed to get status for pod" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-8b455464d-f9xdt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.293268 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.294218 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.295226 4183 status_manager.go:853] "Failed to get status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 
20:03:59.295645 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.295730 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.296617 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.297883 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.299107 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301006 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301906 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.303484 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.305187 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.306082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.308614 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.309539 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.312005 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.313185 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.314671 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.316158 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.320652 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.321893 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.322873 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.324685 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.327030 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.328459 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.329474 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.330380 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.331342 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332105 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332755 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.333584 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334273 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334880 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.335981 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.337487 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539176 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539344 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.776658 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: 
E0813 20:03:59.777414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.778308 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.820526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836446 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836953 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.839287 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 
20:04:00.839373 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.868702 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.872256 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.873984 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.876957 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" exitCode=255 Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877027 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877070 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877941 4183 scope.go:117] "RemoveContainer" 
containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877988 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" Aug 13 20:04:00 crc kubenswrapper[4183]: E0813 20:04:00.878661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.912502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"} Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.918382 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.922374 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952170 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952209 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a"} Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952902 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953193 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953280 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.974963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"} Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.983911 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.985984 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" exitCode=1 Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986118 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"} Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986157 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986735 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:02 crc kubenswrapper[4183]: E0813 20:04:02.987548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998006 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" exitCode=0 Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998105 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"} Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003442 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003935 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003971 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.004254 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.038070 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.523272 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.524281 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:04 crc kubenswrapper[4183]: E0813 20:04:04.524679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871606 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871749 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871952 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.232970 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.235698 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.247545 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.080683 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086603 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090544 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090601 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"} Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.540223 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.542063 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128627 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" exitCode=0 Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128731 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.139614 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.144463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.584765 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:04:12 crc kubenswrapper[4183]: I0813 20:04:12.167278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843"} 
Aug 13 20:04:13 crc kubenswrapper[4183]: I0813 20:04:13.463032 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.370971 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.372425 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.735468 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.737108 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872447 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873120 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873658 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.876995 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.877174 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" 
gracePeriod=2 Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.936494 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.937746 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938080 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210617 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210672 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288075 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288188 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" exitCode=1 Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289403 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" 
event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"} Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289888 4183 scope.go:117] "RemoveContainer" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.939985 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:15 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.102098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:16 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.109451 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:16 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.247089 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300679 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" 
containerID="9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" exitCode=0 Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300729 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"} Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.301340 4183 scope.go:117] "RemoveContainer" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.305561 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.306619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.307283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"} Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.317334 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"00e210723fa2ab3c15d1bb1e413bb28a867eb77be9c752bffa81f06d8a65f0ee"} Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318439 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318740 4183 patch_prober.go:28] 
interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.319123 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321562 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"} Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.332105 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334088 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334885 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"} Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335450 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335487 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335485 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335605 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336257 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336333 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.211510 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.539623 4183 patch_prober.go:28] interesting 
pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.540660 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658478 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658588 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.377545 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666273 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666350 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.667514 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 
20:04:20.667578 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.847498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:20 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:20 crc kubenswrapper[4183]: > Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391098 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391224 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"} Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394316 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394375 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394425 4183 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.405955 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.424731 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.427524 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.428573 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.430940 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" exitCode=255 Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431015 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"} Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431063 4183 scope.go:117] "RemoveContainer" 
containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432643 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432698 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:21 crc kubenswrapper[4183]: E0813 20:04:21.435988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.441900 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.444007 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.445109 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.449403 4183 generic.go:334] "Generic 
(PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" exitCode=255 Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452444 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"} Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452721 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454578 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454626 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455260 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455951 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:22 crc kubenswrapper[4183]: E0813 20:04:22.455397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.677346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678661 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678725 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" exitCode=1 Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"} Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678987 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.679550 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:23 crc kubenswrapper[4183]: 
E0813 20:04:23.680072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.684831 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.685747 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.522960 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695084 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695994 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:24 crc kubenswrapper[4183]: E0813 20:04:24.696619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.871956 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872068 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872273 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872125 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.531477 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:25 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:25 crc kubenswrapper[4183]: > Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665334 4183 
kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665530 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666412 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666474 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.667564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707556 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707921 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.717101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.082431 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:26 crc kubenswrapper[4183]: > Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.102356 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:26 crc kubenswrapper[4183]: > Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.540563 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.541077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" Aug 13 20:04:30 crc kubenswrapper[4183]: I0813 20:04:30.809386 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:30 crc kubenswrapper[4183]: > Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.872612 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873160 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873017 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873257 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:35 crc kubenswrapper[4183]: I0813 20:04:35.523618 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:35 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:35 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.055527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:36 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.067382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:36 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.209341 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:36 crc kubenswrapper[4183]: E0813 20:04:36.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.941233 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="48128e8d38b5cbcd2691da698bd9cac3" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:04:38 crc kubenswrapper[4183]: I0813 20:04:38.803919 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220261 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220322 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:39 crc kubenswrapper[4183]: E0813 20:04:39.221136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.437995 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540376 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" start-of-body= Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.662007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.739757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 20:04:40 crc kubenswrapper[4183]: I0813 20:04:40.928980 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:40 crc kubenswrapper[4183]: > Aug 13 20:04:43 crc kubenswrapper[4183]: I0813 20:04:43.083757 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.326702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.890538 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.404275 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 
20:04:45.410685 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.533142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:45 crc kubenswrapper[4183]: > Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.549551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.559224 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.210305 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.777538 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862868 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"} Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.863354 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866328 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.935187 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.415454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.871874 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.872663 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:49 crc kubenswrapper[4183]: 
I0813 20:04:49.539935 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.540612 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.799903 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.943986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.273701 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.900178 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907557 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907669 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" exitCode=1 Aug 13 20:04:50 crc 
kubenswrapper[4183]: I0813 20:04:50.907705 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"} Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907743 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.908626 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:04:50 crc kubenswrapper[4183]: E0813 20:04:50.909163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210255 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210305 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.212502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.868191 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.917089 4183 logs.go:325] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.150279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.761529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.926570 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.928558 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.930835 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"} Aug 13 20:04:53 crc kubenswrapper[4183]: I0813 20:04:53.243045 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.245119 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.494708 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.522671 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.523584 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:04:54 crc kubenswrapper[4183]: E0813 20:04:54.524261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.626589 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714562 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714823 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714889 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.996764 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log" Aug 13 20:04:55 crc kubenswrapper[4183]: 
I0813 20:04:55.007074 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007175 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b" exitCode=1
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007251 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"}
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007368 4183 scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.008069 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:04:55 crc kubenswrapper[4183]: E0813 20:04:55.008829 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.904963 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.019162 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.020146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.021084 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"}
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.024920 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.452492 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.474971 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.089106 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.629887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.789896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.152330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.472077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.562995 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.675559 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.893419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.073153 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.075333 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.076138 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077032 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" exitCode=255
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077097 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"}
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077146 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.078341 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:04:59 crc kubenswrapper[4183]: E0813 20:04:59.078943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.135243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.541093 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.542262 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.886707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.090156 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.093150 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.094540 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095262 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" exitCode=255
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095305 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"}
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095764 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096302 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096440 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:00 crc kubenswrapper[4183]: E0813 20:05:00.097254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.114000 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.665449 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.666145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.668984 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.817164 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.860638 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.880066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.922569 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.004185 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.104914 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.106219 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110562 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110684 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:01 crc kubenswrapper[4183]: E0813 20:05:01.114138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.669639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.802689 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.997359 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.075704 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114082 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114415 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:02 crc kubenswrapper[4183]: E0813 20:05:02.115311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.270366 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.361686 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.462052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.876429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121445 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" exitCode=0
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"}
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.534136 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.821185 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.024845 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.357290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.467645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.598329 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.140521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636"}
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.415288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.666656 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667611 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667649 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:05 crc kubenswrapper[4183]: E0813 20:05:05.668446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.189768 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.210115 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"
Aug 13 20:05:06 crc kubenswrapper[4183]: E0813 20:05:06.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.251707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.252974 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.298212 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.311324 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.543153 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.788729 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.130607 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.210910 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.426231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.680896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.833891 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170083 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170396 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.172414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"}
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.376013 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.627849 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.740880 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.759596 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.778671 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182649 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" exitCode=0
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182831 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"}
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.369612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540462 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540555 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.816105 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.072842 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.317996 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322904 4183 generic.go:334] "Generic (PLEG): container finished" podID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4" exitCode=255
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerDied","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"}
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.324271 4183 scope.go:117] "RemoveContainer" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.500315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.650605 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.861252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.112401 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.336302 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"}
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.339472 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.340602 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"b6fafe7cac89983f8701bc5ed1df09e2b82c358b3a757377ca15de6546b5eb9f"}
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.411131 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.707689 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.739312 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.205833 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.599179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.955315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.078966 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.098878 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.112587 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=172.083720651 podStartE2EDuration="2m52.083720651s" podCreationTimestamp="2025-08-13 20:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:04:37.37903193 +0000 UTC m=+1244.071696868" watchObservedRunningTime="2025-08-13 20:05:13.083720651 +0000 UTC m=+1279.776385389"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.116733 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4v97" podStartSLOduration=35619880.42286533 podStartE2EDuration="9894h30m55.116660334s" podCreationTimestamp="2024-06-27 13:34:18 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.840933971 +0000 UTC m=+839.533598689" lastFinishedPulling="2025-08-13 20:04:07.534728981 +0000 UTC m=+1214.227393689" observedRunningTime="2025-08-13 20:04:38.881376951 +0000 UTC m=+1245.574041929" watchObservedRunningTime="2025-08-13 20:05:13.116660334 +0000 UTC m=+1279.809325042"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.117062 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmwfn" podStartSLOduration=35620009.78697888 podStartE2EDuration="9894h31m39.117029724s" podCreationTimestamp="2024-06-27 13:33:34 +0000 UTC" firstStartedPulling="2025-08-13 19:59:18.068965491 +0000 UTC m=+924.761630139" lastFinishedPulling="2025-08-13 20:04:07.399016379 +0000 UTC m=+1214.091680987" observedRunningTime="2025-08-13 20:04:39.012673861 +0000 UTC m=+1245.705338829" watchObservedRunningTime="2025-08-13 20:05:13.117029724 +0000 UTC m=+1279.809694442"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.208428 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh","openshift-controller-manager/controller-manager-78589965b8-vmcwt","openshift-image-registry/image-registry-7cbd5666ff-bbfrf","openshift-console/console-84fccc7b6-mkncc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209287 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209340 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209479 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209510 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.224634 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d32440-4cce-4609-96f3-51ac94480aab" path="/var/lib/kubelet/pods/00d32440-4cce-4609-96f3-51ac94480aab/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.226609 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" path="/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.229290 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" path="/var/lib/kubelet/pods/b233d916-bfe3-4ae5-ae39-6b574d1aa05e/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.231822 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" path="/var/lib/kubelet/pods/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.233054 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx","openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.237345 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" podNamespace="openshift-controller-manager" podName="controller-manager-598fc85fd4-8wlsm"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.249551 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250646 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250739 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250754 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250970 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250988 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251000 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251008 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251030 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251037 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251050 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251060 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251074 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251082 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252897 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252938 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252966 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252982 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252995 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.267733 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269541 4183 topology_manager.go:215] "Topology Admit Handler" podUID="becc7e17-2bc7-417d-832f-55127299d70f" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.272943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.276321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282731 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.289509 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292292 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292390 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292493 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292912 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292984 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293451 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.307677 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.394716 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.408564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410445 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410484 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410603 4183 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410646 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410715 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410887 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.462438 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512368 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc 
kubenswrapper[4183]: I0813 20:05:13.512598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512684 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648609 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: 
\"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.649909 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.651487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.655027 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 
20:05:13.676275 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.677413 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.954091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.958326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.023275 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.023213394 podStartE2EDuration="59.023213394s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.020333212 +0000 UTC m=+1280.712998070" watchObservedRunningTime="2025-08-13 20:05:14.023213394 +0000 UTC m=+1280.715878202" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.066177 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k9qqb" podStartSLOduration=35619820.18712965 podStartE2EDuration="9894h30m58.066128853s" podCreationTimestamp="2024-06-27 13:34:16 +0000 UTC" firstStartedPulling="2025-08-13 19:57:51.83654203 +0000 UTC m=+838.529206798" lastFinishedPulling="2025-08-13 20:05:09.715541279 +0000 UTC m=+1276.408206007" observedRunningTime="2025-08-13 20:05:14.064306021 +0000 UTC m=+1280.756970859" watchObservedRunningTime="2025-08-13 20:05:14.066128853 +0000 UTC m=+1280.758793581" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.128077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.204184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.205979 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.205874035 podStartE2EDuration="59.205874035s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.19801498 +0000 UTC m=+1280.890679758" watchObservedRunningTime="2025-08-13 20:05:14.205874035 +0000 UTC m=+1280.898539443" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.214829 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.222339 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.255305 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565414 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565913 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.669956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.855193 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.152712 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.309951 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.628243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.658057 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 
20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.686472 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:15 crc kubenswrapper[4183]: > Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.781369 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.344985 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.485318 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.513489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.789608 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146002 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146600 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" 
Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146629 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626\\\" Netns:\\\"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod \\\"controller-manager-598fc85fd4-8wlsm\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185604 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185687 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185746 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.186516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23\\\" Netns:\\\"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod \\\"route-controller-manager-6884dcf749-n4qpx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209062 4183 scope.go:117] "RemoveContainer" 
containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209095 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.297640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.302574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.381660 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.509832 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.625271 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 
20:05:17.792176 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.175892 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.243339 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.321978 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.494179 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497201 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" exitCode=255 Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"} Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497284 4183 scope.go:117] "RemoveContainer" 
containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.498290 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:18 crc kubenswrapper[4183]: E0813 20:05:18.499112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-649bd778b4-tt5tw_openshift-machine-api(45a8038e-e7f2-4d93-a6f5-7753aa54e63f)\"" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.666389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818437 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.875753 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.977189 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.995738 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.996007 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:18.996970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.517497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540079 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540403 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.545389 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} pod="openshift-console/console-5d9678894c-wx62n" containerMessage="Container console failed startup probe, will be restarted" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.589297 4183 reflector.go:351] Caches populated 
for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.709757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.977120 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:19 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:19 crc kubenswrapper[4183]: > Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.011674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.084552 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.210537 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:20 crc kubenswrapper[4183]: E0813 20:05:20.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.219236 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.743720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.867244 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.066612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.505896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.552288 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.669562 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.088839 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.293069 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.369896 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.609190 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.715427 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.789590 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.111893 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.279471 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553213 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553762 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.554111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556456 4183 patch_prober.go:28] interesting 
pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557599 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.558583 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.560568 4183 patch_prober.go:28] interesting pod/route-controller-manager-6884dcf749-n4qpx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc 
kubenswrapper[4183]: I0813 20:05:23.560953 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.636023 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podStartSLOduration=242.635956854 podStartE2EDuration="4m2.635956854s" podCreationTimestamp="2025-08-13 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.62989526 +0000 UTC m=+1290.322560408" watchObservedRunningTime="2025-08-13 20:05:23.635956854 +0000 UTC m=+1290.328621982" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.706151 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.827966 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.949042 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.086654 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.125475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.191367 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205474 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205611 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.365075 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567394 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567502 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.815329 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.826046 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.927063 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podStartSLOduration=241.926998625 podStartE2EDuration="4m1.926998625s" podCreationTimestamp="2025-08-13 20:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.71475329 +0000 UTC m=+1290.407418348" watchObservedRunningTime="2025-08-13 20:05:24.926998625 +0000 UTC m=+1291.619663633" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.203459 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207311 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 
25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.770618 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.843280 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.898295 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 20:05:26 crc 
kubenswrapper[4183]: I0813 20:05:26.203430 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:26 crc kubenswrapper[4183]: > Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.342830 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.352289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.531826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532359 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.533765 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583286 4183 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588158 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588850 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.589271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" 
event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.655378 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.734553 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.770986 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.829223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.840965 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.850381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.912068 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.416399 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.518744 4183 handlers.go:79] "Exec lifecycle hook for Container in Pod failed" err="command 'sleep 25' exited with 137: " execCommand=["sleep","25"] containerName="console" pod="openshift-console/console-5d9678894c-wx62n" message="" Aug 13 20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.519483 4183 
kuberuntime_container.go:653] "PreStop hook failed" err="command 'sleep 25' exited with 137: " pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.519589 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" gracePeriod=33 Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.588263 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.601125 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"d329928035eabc24218bf53782983e5317173e1aceaf58f4d858919ca11603ad"} Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.732427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.175705 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.615064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"aef36bd2553b9941561332862e00ec117b296eb1e04d6191f7d1a0e272134312"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621703 4183 logs.go:325] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621932 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" exitCode=255 Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.622022 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628458 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628643 4183 kuberuntime_manager.go:1262] container &Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt 
--files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.632001 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" 
event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.640740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.744903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.782051 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-644bb77b49-5x5xk" podStartSLOduration=258.782001936 podStartE2EDuration="4m18.782001936s" podCreationTimestamp="2025-08-13 20:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:28.78074536 +0000 UTC m=+1295.473410118" watchObservedRunningTime="2025-08-13 20:05:28.782001936 +0000 UTC m=+1295.474666664" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.844642 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.059601 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.060691 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" containerID="cri-o://15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" gracePeriod=5 Aug 13 20:05:29 crc 
kubenswrapper[4183]: I0813 20:05:29.563129 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.647320 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.648997 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.649295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.974239 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211475 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211526 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:30 crc kubenswrapper[4183]: E0813 20:05:30.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 
20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.226111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.269216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcqzh" podStartSLOduration=35619822.16397022 podStartE2EDuration="9894h31m16.269154122s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.841939639 +0000 UTC m=+839.534604367" lastFinishedPulling="2025-08-13 20:05:26.947123582 +0000 UTC m=+1293.639788270" observedRunningTime="2025-08-13 20:05:30.047038901 +0000 UTC m=+1296.739703649" watchObservedRunningTime="2025-08-13 20:05:30.269154122 +0000 UTC m=+1296.961818970" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469728 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475036 4183 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475118 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.654393 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:30 crc kubenswrapper[4183]: > Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.657994 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.658370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.188512 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.227737 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.434834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.543125 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:31 crc kubenswrapper[4183]: > Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.663391 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.670843 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"8c343d7ff4e8fd8830942fe00e0e9953854c7d57807d54ef2fb25d9d7bd48b55"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.713016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209841 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209982 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.802208 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.847086 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.158289 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159038 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) 
--authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.172930 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173636 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.442259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.701413 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.713829 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714252 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9"}
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714456 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.718388 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720037 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720403 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.722308 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"
Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.869762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.166226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.212181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.249053 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.312330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.445945 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.446088 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526706 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526756 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526920 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.527030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620106 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") "
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620218 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") "
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620246 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") "
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620339 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") "
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620378 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") "
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log" (OuterVolumeSpecName: "var-log") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623312 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests" (OuterVolumeSpecName: "manifests") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623479 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.658206 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.702693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.703227 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722528 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722593 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722607 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") on node \"crc\" DevicePath \"\""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722622 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722636 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") on node \"crc\" DevicePath \"\""
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.742655 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.743210 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"6e2b2ebcbabf5c1d8517ce153f68731713702ba7ac48dbbb35aa2337043be534"}
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.749146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760219 4183 generic.go:334] "Generic (PLEG): container finished" podID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e" exitCode=255
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerDied","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"}
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760945 4183 scope.go:117] "RemoveContainer" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.780158 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"95ea01f530cb8f9c84220be232e511a271a9480b103ab0095af603077e0cb252"}
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.781288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787186 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787250 4183 generic.go:334] "Generic (PLEG): container finished" podID="bf055e84f32193b9c1c21b0c34a61f01" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" exitCode=137
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788154 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788564 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788989 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.789131 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.951503 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"
Aug 13 20:05:34 crc kubenswrapper[4183]: E0813 20:05:34.952199 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"
Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.952261 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"} err="failed to get container status \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": rpc error: code = NotFound desc = could not find container \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.225693 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf055e84f32193b9c1c21b0c34a61f01" path="/var/lib/kubelet/pods/bf055e84f32193b9c1c21b0c34a61f01/volumes"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.229141 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.232216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.311740 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321850 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321937 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.341507 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.341580 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.386306 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.800662 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"a91ec548a60f506a0a73fce12c0a6b3a787ccba29077a1f7d43da8a738f473d2"}
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.031690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:36 crc kubenswrapper[4183]: >
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.140880 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.301833 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:36 crc kubenswrapper[4183]: >
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.511890 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log"
Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"4dd7298bc15ad94ac15b2586221cba0590f58e6667404ba80b077dc597db4950"}
Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200104 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = kubelet may be retrying requests that are timing out in CRI-O due to system load. Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5"
Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200320 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: kubelet may be retrying requests that are timing out in CRI-O due to system load. Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved
Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"kubelet may be retrying requests that are timing out in CRI-O due to system load. Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.344231 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.464262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.819730 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.905756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.438414 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.835543 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.836025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"18768e4e615786eedd49b25431da2fe5b5aaf29e37914eddd9e94881eac5e8c1"}
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.019126 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.153324 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.188592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.261904 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538769 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.550611 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.671238 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.854671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.093265 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.161234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:40 crc kubenswrapper[4183]: >
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.347047 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.397675 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.468081 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.475820 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.483262 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.708985 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.830628 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:40 crc kubenswrapper[4183]: >
Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.179381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226057 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4"
Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226360 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash
Aug 13 20:05:41 crc kubenswrapper[4183]: set -o allexport
Aug 13 20:05:41 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Aug 13 20:05:41 crc kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env
Aug 13 20:05:41 crc kubenswrapper[4183]: else
Aug 13 20:05:41 crc kubenswrapper[4183]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Aug 13 20:05:41 crc kubenswrapper[4183]: exit 1
Aug 13 20:05:41 crc kubenswrapper[4183]: fi
Aug 13 20:05:41 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Aug 13 20:05:41 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded
Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2"
Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.666475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.869956 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"
Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.828248 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.878397 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.880586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"c97fff743291294c8c2671715b19a9576ef9f434134cc0f02b695dbc32284d86"}
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209312 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209366 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.884551 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.897724 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.900136 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.902595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"}
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.278440 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.316338 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.541374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.817110 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.916519 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.918705 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.920139 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"}
Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.013856 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.089826 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podStartSLOduration=304.089658085 podStartE2EDuration="5m4.089658085s" podCreationTimestamp="2025-08-13 20:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:45.042200056 +0000 UTC m=+1311.734864874" watchObservedRunningTime="2025-08-13 20:05:45.089658085 +0000 UTC m=+1311.782322903" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.250964 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251273 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.327881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665239 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665483 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.901482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:45 crc kubenswrapper[4183]: >
Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.927429 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"
Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:05:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.938478 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"7bc73c64b9d7e197b77d0f43ab147a148818682c82020be549d82802a07420f4"}
Aug 13 20:05:48 crc kubenswrapper[4183]: I0813 20:05:48.956385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.169157 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.521961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.699518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.716124 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.778479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:50 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:50 crc kubenswrapper[4183]: >
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.716496 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718307 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718554 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718680 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.748040 4183 scope.go:117] "RemoveContainer" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"
Aug 13 20:05:55 crc kubenswrapper[4183]: I0813 20:05:55.816884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:05:55 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:05:55 crc kubenswrapper[4183]: >
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.068190 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"]
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.070513 4183 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: E0813 20:05:57.072133 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.072184 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.072369 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.073129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.078051 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.080371 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.117579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"]
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165299 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165432 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.266818 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267699 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267745 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.298670 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.402598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.861827 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862628 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" containerID="cri-o://b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" gracePeriod=90
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" gracePeriod=90
Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.989886 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"]
Aug 13 20:05:58 crc kubenswrapper[4183]: I0813 20:05:58.042959 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"}
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.055571 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.056695 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058388 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" exitCode=0
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"}
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058521 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.795340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.911750 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.071854 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"}
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.076769 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676057 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd-readiness ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:06:00 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld
Aug 13 20:06:00 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676494 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676601 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.711960 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" podStartSLOduration=3.711887332 podStartE2EDuration="3.711887332s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:00.10385754 +0000 UTC m=+1326.796522368" watchObservedRunningTime="2025-08-13 20:06:00.711887332 +0000 UTC m=+1327.404552310"
Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.845332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.971234 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676342 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd-readiness ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:06:05 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld
Aug 13 20:06:05 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676435 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.907656 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.913074 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.994135 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" gracePeriod=15
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.146170 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log"
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147353 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log"
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147427 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" exitCode=2
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"}
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147512 4183 scope.go:117] "RemoveContainer" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475603 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log"
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475695 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.528768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529095 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529400 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529551 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530391 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.531014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") "
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config" (OuterVolumeSpecName: "console-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548824 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548848 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.549462 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca" (OuterVolumeSpecName: "service-ca") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.554526 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555144 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555501 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b" (OuterVolumeSpecName: "kube-api-access-hjq9b") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "kube-api-access-hjq9b".
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633186 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633267 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633282 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633293 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633306 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633316 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633327 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:08 crc 
kubenswrapper[4183]: I0813 20:06:08.155627 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155961 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155971 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.156053 4183 scope.go:117] "RemoveContainer" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.264684 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.270602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:09 crc kubenswrapper[4183]: I0813 20:06:09.219349 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" path="/var/lib/kubelet/pods/384ed0e8-86e4-42df-bd2c-604c1f536a15/volumes" Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.675650 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]informer-sync 
ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:10 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.676308 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:14 crc kubenswrapper[4183]: I0813 20:06:14.718261 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666176 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver 
namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666751 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666389 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666979 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.666823 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.667491 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: 
connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.666322 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.667066 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.704832 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.705725 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" containerID="cri-o://2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.291244 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336637 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" exitCode=0 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336890 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336854 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.399059 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.400918 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401034 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.407107 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities" (OuterVolumeSpecName: "utilities") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418403 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418835 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" containerID="cri-o://a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.460514 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp" (OuterVolumeSpecName: "kube-api-access-r7dbp") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "kube-api-access-r7dbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506106 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506186 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.676153 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.710297 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.713096 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.865597 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.866587 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866673 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} err="failed to get container status \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866689 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.867610 
4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867833 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} err="failed to get container status \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867857 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.868437 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not found: ID does not exist" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.868469 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} err="failed to get container status \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": 
rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not found: ID does not exist" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.022861 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.079232 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.143688 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144333 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144370 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" containerID="cri-o://2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144341 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" gracePeriod=30 Aug 13 20:06:32 crc 
kubenswrapper[4183]: I0813 20:06:32.144696 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.149628 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150313 4183 topology_manager.go:215] "Topology Admit Handler" podUID="56d9256d8ee968b89d58cda59af60969" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150575 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150679 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150738 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150753 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150828 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150845 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150855 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150915 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150928 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150938 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150965 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150986 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150998 4183 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151010 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151022 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151035 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151044 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151059 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151069 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151081 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151090 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151384 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 
20:06:32.151408 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151430 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151446 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151459 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151472 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151499 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151523 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151534 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151549 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151685 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151697 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151714 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151723 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151744 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151755 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.154457 4183 cpu_manager.go:396] "RemoveStaleState: removing 
container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154473 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220710 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324758 4183 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.325074 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.377766 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" exitCode=0 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.380354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"} Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.513021 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.565031 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.567986 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.585559 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587046 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587198 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.610520 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.613113 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" containerID="cri-o://81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" gracePeriod=2 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628478 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628580 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628636 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628668 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: 
\"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628712 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.630710 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.631118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.632228 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities" (OuterVolumeSpecName: "utilities") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.646752 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s" (OuterVolumeSpecName: "kube-api-access-nzb4s") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "kube-api-access-nzb4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746159 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746221 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746236 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746252 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.769860 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.770273 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" 
containerName="registry-server" containerID="cri-o://844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" gracePeriod=2 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.808083 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.223896 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" path="/var/lib/kubelet/pods/2eb2b200bca0d10cf0fe16fb7c0caf80/volumes" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.231017 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.237370 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" path="/var/lib/kubelet/pods/9ad279b4-d9dc-42a8-a1c8-a002bd063482/volumes" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386913 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.387039 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.389317 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities" (OuterVolumeSpecName: "utilities") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.406403 4183 generic.go:334] "Generic (PLEG): container finished" podID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerID="6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.407500 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414144 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414560 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs" (OuterVolumeSpecName: "kube-api-access-n59fs") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "kube-api-access-n59fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415606 4183 scope.go:117] "RemoveContainer" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.447434 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.448262 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482407 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482857 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.483860 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.489756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.490010 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.501195 4183 scope.go:117] "RemoveContainer" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.509593 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.538016 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log" Aug 13 20:06:33 crc 
kubenswrapper[4183]: I0813 20:06:33.548408 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548477 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548491 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548506 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548557 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" exitCode=2 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.550728 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.605947 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.611004 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.651167 4183 scope.go:117] "RemoveContainer" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699537 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699654 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.703280 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities" (OuterVolumeSpecName: "utilities") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.713128 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.715474 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr" (OuterVolumeSpecName: "kube-api-access-mwzcr") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "kube-api-access-mwzcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.766120 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809106 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809204 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.876493 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.951741 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.956229 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956396 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} err="failed to get container status \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": rpc error: code = NotFound desc = could not find container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956556 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957238 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957296 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} err="failed to get container status \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": rpc error: code = NotFound desc = could not find container 
\"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957317 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957667 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957698 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"} err="failed to get container status \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957715 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.028438 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.113426 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 
crc kubenswrapper[4183]: I0813 20:06:34.115441 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.124953 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.127435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.190249 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.226137 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.230289 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.266904 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.267957 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" podNamespace="openshift-marketplace" podName="redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.268649 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269046 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269076 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269091 4183 cpu_manager.go:396] "RemoveStaleState: 
removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269100 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269114 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269122 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269136 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269143 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269155 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269164 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269178 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269186 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269219 4183 cpu_manager.go:396] "RemoveStaleState: removing 
container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269227 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269237 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269244 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269398 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269428 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.271124 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.302167 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.332213 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448725 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448842 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448906 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.481760 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.515334 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551391 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.552235 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.553105 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " 
pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.610158 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.625273 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.626101 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626376 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626490 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.631271 4183 remote_runtime.go:432] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631345 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631366 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631658 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.641227 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641315 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641344 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.642564 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642589 4183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642599 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642761 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"} Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642974 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.645946 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646259 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646347 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.650081 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650302 4183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650482 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.652664 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.653002 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.668983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not 
exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.669054 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676184 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689053 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689169 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.690944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status 
\"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.691014 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694191 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694252 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695225 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695266 4183 scope.go:117] "RemoveContainer" 
containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705911 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705945 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.706983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707016 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707643 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could 
not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707677 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713412 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713475 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716474 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716517 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 
20:06:34.722234 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.722283 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733247 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733349 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.739469 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741499 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741566 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742463 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742497 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745275 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container 
\"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745312 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746895 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746915 4183 scope.go:117] "RemoveContainer" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767764 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767926 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.817313 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.004094 4183 scope.go:117] "RemoveContainer" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 
20:06:35.109002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.135918 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.195435 4183 scope.go:117] "RemoveContainer" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.225423 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" path="/var/lib/kubelet/pods/6db26b71-4e04-4688-a0c0-00e06e8c888d/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.228259 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" path="/var/lib/kubelet/pods/bb917686-edfb-4158-86ad-6fce0abec64c/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.229735 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" path="/var/lib/kubelet/pods/ccdf38cf-634a-41a2-9c8b-74bb86af80a7/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.622105 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.666846 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.667018 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705030 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"} Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705097 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705171 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714641 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.715053 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716059 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock" (OuterVolumeSpecName: "var-lock") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739478 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739660 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" podNamespace="openshift-marketplace" podName="certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.740078 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: E0813 20:06:35.752916 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.752975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.753232 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.754313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.802645 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.816953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817278 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817663 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817921 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817940 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") on node 
\"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817955 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.919704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920436 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921238 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921268 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.926700 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.967949 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.090066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.638373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:36 crc kubenswrapper[4183]: W0813 20:06:36.663759 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5391dc5d_0f00_4464_b617_b164e2f9b77a.slice/crio-93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d WatchSource:0}: Error finding container 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d: Status 404 returned error can't find the container with id 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722331 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722712 4183 topology_manager.go:215] 
"Topology Admit Handler" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" podNamespace="openshift-marketplace" podName="redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.724295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733585 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733727 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740443 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438" exitCode=0 Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740556 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" 
event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740590 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.744770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.834905 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.836955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc 
kubenswrapper[4183]: I0813 20:06:36.836767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.890610 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.896240 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.151050 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.657129 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:37 crc kubenswrapper[4183]: W0813 20:06:37.678370 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e1b407b_80a9_40d6_aa0b_a5ffb555c8ed.slice/crio-3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8 WatchSource:0}: Error finding container 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8: Status 404 returned error can't find the container with id 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8 Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.752983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"} Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.755721 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215" exitCode=0 Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.756002 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.342086 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.342230 4183 topology_manager.go:215] "Topology Admit Handler" 
podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" podNamespace="openshift-marketplace" podName="community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.343500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.393189 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460305 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460466 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: 
\"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563335 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.624249 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6hl\" (UniqueName: 
\"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.675174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.780855 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785269 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" exitCode=0 Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785411 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.796367 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"} Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.382481 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.811895 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" 
event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"} Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.666927 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.667427 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.822606 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" exitCode=0 Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.822832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"} Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.827595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} Aug 13 20:06:41 crc kubenswrapper[4183]: I0813 20:06:41.835751 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" 
event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.666543 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.667135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231273 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231314 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400" Aug 13 20:06:47 crc kubenswrapper[4183]: I0813 20:06:47.015557 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.218239 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 
13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.869394 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.913567 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.033314 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.668940 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.669135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.717035 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.723046 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.910383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866"}
Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.343841 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.919581 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98"}
Aug 13 20:06:52 crc kubenswrapper[4183]: I0813 20:06:52.939619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b"}
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.719310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720070 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720171 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720205 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Pending"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.847698 4183 scope.go:117] "RemoveContainer" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.985710 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289"}
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666286 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666865 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.997314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202"}
Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023164 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679" exitCode=0
Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023567 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"}
Aug 13 20:06:59 crc kubenswrapper[4183]: I0813 20:06:59.164298 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=9.164227237 podStartE2EDuration="9.164227237s" podCreationTimestamp="2025-08-13 20:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:56.384585892 +0000 UTC m=+1383.077250730" watchObservedRunningTime="2025-08-13 20:06:59.164227237 +0000 UTC m=+1385.856892155"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.040353 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"}
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.666357 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.667547 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.717568 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718035 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718195 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718446 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.723382 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.760496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.947442 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4txfd" podStartSLOduration=5.377419812 podStartE2EDuration="26.947382872s" podCreationTimestamp="2025-08-13 20:06:34 +0000 UTC" firstStartedPulling="2025-08-13 20:06:36.744736971 +0000 UTC m=+1363.437401649" lastFinishedPulling="2025-08-13 20:06:58.314699941 +0000 UTC m=+1385.007364709" observedRunningTime="2025-08-13 20:07:00.09942957 +0000 UTC m=+1386.792094548" watchObservedRunningTime="2025-08-13 20:07:00.947382872 +0000 UTC m=+1387.640047580"
Aug 13 20:07:01 crc kubenswrapper[4183]: I0813 20:07:01.053138 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:02 crc kubenswrapper[4183]: I0813 20:07:02.062380 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066363 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa" exitCode=0
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066554 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"}
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225319 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225450 4183 topology_manager.go:215] "Topology Admit Handler" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" podNamespace="openshift-kube-apiserver" podName="installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.241292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252570 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252718 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371593 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371635 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473926 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.474127 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.460102 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.535456 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.632665 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.633258 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.771343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.907291 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.111193 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"}
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.183763 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfdk8" podStartSLOduration=4.576405217 podStartE2EDuration="30.18370006s" podCreationTimestamp="2025-08-13 20:06:35 +0000 UTC" firstStartedPulling="2025-08-13 20:06:37.758363852 +0000 UTC m=+1364.451028550" lastFinishedPulling="2025-08-13 20:07:03.365658395 +0000 UTC m=+1390.058323393" observedRunningTime="2025-08-13 20:07:05.183269748 +0000 UTC m=+1391.875934756" watchObservedRunningTime="2025-08-13 20:07:05.18370006 +0000 UTC m=+1391.876364888"
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.402368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.588097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:05 crc kubenswrapper[4183]: W0813 20:07:05.615964 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod47a054e4_19c2_4c12_a054_fc5edc98978a.slice/crio-82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763 WatchSource:0}: Error finding container 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763: Status 404 returned error can't find the container with id 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667290 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667378 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091412 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.136054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"}
Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.550982 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.151422 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4txfd" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" containerID="cri-o://ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" gracePeriod=2
Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.152121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"}
Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.231709 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-11-crc" podStartSLOduration=5.231646296 podStartE2EDuration="5.231646296s" podCreationTimestamp="2025-08-13 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:07.229267578 +0000 UTC m=+1393.921932296" watchObservedRunningTime="2025-08-13 20:07:07.231646296 +0000 UTC m=+1393.924311034"
Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.286308 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:07 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:07 crc kubenswrapper[4183]: >
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.192452 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" exitCode=0
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.194124 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"}
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.713376 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.890060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") "
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891033 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") "
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") "
Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.892132 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities" (OuterVolumeSpecName: "utilities") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.011540 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg" (OuterVolumeSpecName: "kube-api-access-ckbzg") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "kube-api-access-ckbzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015858 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.212389 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.225379 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226151 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"}
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226223 4183 scope.go:117] "RemoveContainer" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.320702 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.376467 4183 scope.go:117] "RemoveContainer" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.456132 4183 scope.go:117] "RemoveContainer" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438"
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.543745 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.571687 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667045 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667532 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:07:11 crc kubenswrapper[4183]: I0813 20:07:11.218191 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" path="/var/lib/kubelet/pods/af6c965e-9dc8-417a-aa1c-303a50ec9adc/volumes"
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.284216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285762 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" exitCode=0
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"}
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285930 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.666054 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.666198 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.185655 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"}
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295262 4183 scope.go:117] "RemoveContainer" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295293 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.302642 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.331041 4183 scope.go:117] "RemoveContainer" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370123 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370703 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370929 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370972 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371133 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371182 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371243 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371284 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371667 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371702 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371918 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371945 4183 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.372972 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.380871 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.384032 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj" (OuterVolumeSpecName: "kube-api-access-6j2kj") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "kube-api-access-6j2kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.395651 4183 scope.go:117] "RemoveContainer" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.443578 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473163 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473231 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473243 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.514920 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.515325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config" (OuterVolumeSpecName: "config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.520955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574284 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574332 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574348 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.616269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit" (OuterVolumeSpecName: "audit") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.619083 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675731 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675868 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.688930 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-serving-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.777555 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.332901 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.349174 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.468404 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313383 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" exitCode=0 Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313692 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" containerID="cri-o://d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" gracePeriod=2 Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313898 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.219654 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23d6435-6431-4905-b41b-a517327385e5" 
path="/var/lib/kubelet/pods/b23d6435-6431-4905-b41b-a517327385e5/volumes" Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322545 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" exitCode=0 Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"} Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.070461 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076068 4183 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076570 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076593 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076607 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076615 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076996 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077004 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077014 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077058 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077077 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077085 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077093 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 
20:07:20.077107 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077117 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077129 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077147 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077156 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077310 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077325 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077335 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" 
containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077358 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077382 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077392 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077402 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077523 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077532 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" 
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077555 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078031 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.078358 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078375 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.079939 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.079958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.090318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.120717 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143089 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.144162 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.145585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.152960 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.163645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174554 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174703 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: 
\"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174820 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174860 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174926 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174956 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174984 4183 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175008 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175038 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175065 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.187850 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.188868 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.189288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.265979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276394 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276475 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276838 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 
20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277022 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc 
kubenswrapper[4183]: I0813 20:07:20.278049 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.279247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281050 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281554 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.288187 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities" (OuterVolumeSpecName: "utilities") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.294052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.327843 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.329297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.334041 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w" (OuterVolumeSpecName: "kube-api-access-nqx8w") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "kube-api-access-nqx8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.339052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.350518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373138 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"}
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373208 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373223 4183 scope.go:117] "RemoveContainer" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380660 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380710 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.390558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"}
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.451122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.503198 4183 scope.go:117] "RemoveContainer" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.539637 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7svp" podStartSLOduration=4.757178704 podStartE2EDuration="42.539582856s" podCreationTimestamp="2025-08-13 20:06:38 +0000 UTC" firstStartedPulling="2025-08-13 20:06:40.825674156 +0000 UTC m=+1367.518338884" lastFinishedPulling="2025-08-13 20:07:18.608078248 +0000 UTC m=+1405.300743036" observedRunningTime="2025-08-13 20:07:20.539262247 +0000 UTC m=+1407.231927065" watchObservedRunningTime="2025-08-13 20:07:20.539582856 +0000 UTC m=+1407.232247584"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.665127 4183 scope.go:117] "RemoveContainer" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.767388 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.790747 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.105498 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.120492 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.218084 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" path="/var/lib/kubelet/pods/5391dc5d-0f00-4464-b617-b164e2f9b77a/volumes"
Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.355501 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"]
Aug 13 20:07:21 crc kubenswrapper[4183]: W0813 20:07:21.374354 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e8708a_e40d_4d28_846b_c52eda4d1755.slice/crio-2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8 WatchSource:0}: Error finding container 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8: Status 404 returned error can't find the container with id 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8
Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.402828 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8"}
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.164391 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"]
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165221 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165237 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server"
Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165257 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165266 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content"
Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165282 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165291 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165468 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.166174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.170125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.172343 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.201478 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"]
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210239 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210690 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.312677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.314463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.315166 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.390261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.411919 4183 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d" exitCode=0
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.412028 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d"}
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.499614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.031373 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.032141 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.033275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.063699 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.064197 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.127986 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137526 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239817 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239944 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.240035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.318300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.354371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.432220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"ee9b6eb9461a74aad78cf9091cb08ce2922ebd34495ef62c73d64b9e4a16fd71"}
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.506287 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.097175 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: W0813 20:07:24.115985 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaca1f9ff_a685_4a78_b461_3931b757f754.slice/crio-d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056 WatchSource:0}: Error finding container d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056: Status 404 returned error can't find the container with id d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.337192 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.337768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.338997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463611 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463699 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463837 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.476437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"907e380361ba3b0228dd34236f32c08de85ddb289bd11f2a1c6bc95e5042248f"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.484451 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.488919 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.498696 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.564857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.566492 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.567348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.700714 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.702078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podStartSLOduration=87.702000825 podStartE2EDuration="1m27.702000825s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:24.689446405 +0000 UTC m=+1411.382111213" watchObservedRunningTime="2025-08-13 20:07:24.702000825 +0000 UTC m=+1411.394665613"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.963169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.452551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.453223 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.522573 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"}
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.527492 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"}
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.561588 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-crc" podStartSLOduration=3.561536929 podStartE2EDuration="3.561536929s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.553178059 +0000 UTC m=+1412.245842817" watchObservedRunningTime="2025-08-13 20:07:25.561536929 +0000 UTC m=+1412.254201967"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.625133 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-11-crc" podStartSLOduration=3.62507817 podStartE2EDuration="3.62507817s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.606249501 +0000 UTC m=+1412.298914199" watchObservedRunningTime="2025-08-13 20:07:25.62507817 +0000 UTC m=+1412.317742888"
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.189841 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.548853 4183 generic.go:334] "Generic (PLEG): container finished" podID="1784282a-268d-4e44-a766-43281414e2dc" containerID="5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef" exitCode=0
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.549013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"}
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.552214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"}
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.561049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"}
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608386 4183 patch_prober.go:28] interesting pod/apiserver-7fc54b8dd7-d2bhp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:07:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608501 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.610608 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-11-crc" podStartSLOduration=3.610560436 podStartE2EDuration="3.610560436s" podCreationTimestamp="2025-08-13 20:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:27.606207552 +0000 UTC m=+1414.298872320" watchObservedRunningTime="2025-08-13 20:07:27.610560436 +0000 UTC m=+1414.303225224"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.081528 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") "
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181506 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") "
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181844 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.192577 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282391 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282458 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571373 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"}
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571444 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571490 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675683 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675947 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.055307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:30 crc kubenswrapper[4183]: 	timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:30 crc kubenswrapper[4183]: >
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.476521 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.489692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.785087 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.794980 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796348 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" exitCode=1
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"}
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796711 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.798757 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:30 crc kubenswrapper[4183]: E0813 20:07:30.802263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.494135 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.496093 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-11-crc" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" containerID="cri-o://1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" gracePeriod=30
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.806205 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900684 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: E0813 20:07:33.901086 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901101 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901254 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.941547 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977020 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977103 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977151 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078512 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.108364 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.241523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.910347 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Aug 13 20:07:34 crc kubenswrapper[4183]: W0813 20:07:34.931394 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3557248c_8f70_4165_aa66_8df983e7e01a.slice/crio-afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309 WatchSource:0}: Error finding container afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309: Status 404 returned error can't find the container with id afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309 Aug 13 20:07:35 crc kubenswrapper[4183]: I0813 20:07:35.846426 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"} Aug 13 20:07:36 crc kubenswrapper[4183]: I0813 20:07:36.856537 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"} Aug 13 20:07:37 crc kubenswrapper[4183]: I0813 20:07:37.071385 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.071312054 podStartE2EDuration="4.071312054s" podCreationTimestamp="2025-08-13 20:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:37.058583339 +0000 UTC m=+1423.751248147" watchObservedRunningTime="2025-08-13 20:07:37.071312054 +0000 UTC m=+1423.763976852" Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 
20:07:38.884289 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888306 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log" Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888691 4183 generic.go:334] "Generic (PLEG): container finished" podID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" exitCode=1 Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"} Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.005603 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899108 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" exitCode=0 Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374439 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374553 4183 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480018 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480112 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480227 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480543 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock" (OuterVolumeSpecName: "var-lock") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.481650 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.498477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.535472 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581704 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581765 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581835 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929182 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929511 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" 
containerID="cri-o://346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" gracePeriod=2 Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929634 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931381 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"} Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931445 4183 scope.go:117] "RemoveContainer" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.023616 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.038541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.226148 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" path="/var/lib/kubelet/pods/47a054e4-19c2-4c12-a054-fc5edc98978a/volumes" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.536707 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699273 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699872 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.700154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.701044 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities" (OuterVolumeSpecName: "utilities") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.706169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl" (OuterVolumeSpecName: "kube-api-access-vv6hl") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "kube-api-access-vv6hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.802685 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.803220 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944462 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" exitCode=0 Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944597 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"} Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"} Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946204 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.953649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.981507 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.052749 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.152768 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.154453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154529 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"} err="failed to get container status \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154541 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" Aug 13 
20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.155376 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155404 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} err="failed to get container status \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155414 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.162089 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.162170 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"} err="failed to get container status 
\"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.363078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmqwc" podStartSLOduration=4.845765531 podStartE2EDuration="1m6.363011681s" podCreationTimestamp="2025-08-13 20:06:36 +0000 UTC" firstStartedPulling="2025-08-13 20:06:38.788419425 +0000 UTC m=+1365.481084033" lastFinishedPulling="2025-08-13 20:07:40.305665565 +0000 UTC m=+1426.998330183" observedRunningTime="2025-08-13 20:07:42.355966279 +0000 UTC m=+1429.048631407" watchObservedRunningTime="2025-08-13 20:07:42.363011681 +0000 UTC m=+1429.055676399" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.473599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.527765 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.615264 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.643988 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:07:43 crc kubenswrapper[4183]: I0813 20:07:43.217590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" path="/var/lib/kubelet/pods/8518239d-8dab-48ac-a3c1-e775566b9bff/volumes" Aug 13 20:07:45 crc kubenswrapper[4183]: I0813 20:07:45.212168 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" Aug 13 20:07:45 crc kubenswrapper[4183]: E0813 20:07:45.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.152606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.153146 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:48 crc kubenswrapper[4183]: 
I0813 20:07:48.274609 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" probeResult="failure" output=< Aug 13 20:07:48 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:07:48 crc kubenswrapper[4183]: > Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.746623 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747374 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747494 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.327978 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.333721 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.336866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" 
containerID="cri-o://5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" gracePeriod=30 Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337094 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" containerID="cri-o://da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" gracePeriod=30 Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337181 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" containerID="cri-o://daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" gracePeriod=30 Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346086 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346238 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346406 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346436 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346453 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346461 4183 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346471 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346479 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346498 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346511 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346519 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346529 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346535 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346554 4183 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content" Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346565 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346574 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346714 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346729 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346740 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346756 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346765 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447443 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447855 4183 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.548995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.582463 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.602443 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.604392 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.624543 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.664649 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751139 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751244 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751279 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: 
"92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751451 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: "92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751558 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.853326 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.090766 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094243 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094309 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" exitCode=2 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094315 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094332 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094538 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099010 4183 generic.go:334] "Generic (PLEG): container finished" podID="aca1f9ff-a685-4a78-b461-3931b757f754" containerID="f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099494 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"} Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.100631 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.152190 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.105101 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" 
containerID="cri-o://18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" gracePeriod=2 Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.209677 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.221052 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b2a8634cfe8a21cffcc98cc8c87160" path="/var/lib/kubelet/pods/92b2a8634cfe8a21cffcc98cc8c87160/volumes" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.553184 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.676586 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680156 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680224 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: 
I0813 20:07:59.680443 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock" (OuterVolumeSpecName: "var-lock") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.689991 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781577 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781662 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781847 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782093 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782114 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782133 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782925 4183 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities" (OuterVolumeSpecName: "utilities") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.789589 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78" (OuterVolumeSpecName: "kube-api-access-h4g78") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "kube-api-access-h4g78". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883325 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.114082 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" exitCode=0 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115157 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115204 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116555 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116586 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126548 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126932 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126988 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130167 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130727 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"2be75d1e514468ff600570e8a9d6f13a97a775a4d62bca4f69b639c8be59cf64"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.207987 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.295514 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320057 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320538 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 
20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.320963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321206 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321231 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321300 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321309 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321319 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321327 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321342 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321349 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" 
containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321360 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321367 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321379 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321385 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321395 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321405 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321518 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321530 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321543 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321554 4183 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321564 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321575 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326298 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" containerID="cri-o://4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326705 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326757 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" containerID="cri-o://6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395709 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395815 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497494 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497539 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.549594 4183 scope.go:117] "RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.673146 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.674149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674212 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} err="failed to get container status \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 
18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674225 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.677462 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} err="failed to get container status \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677535 4183 scope.go:117] "RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.678622 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" 
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.678687 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"} err="failed to get container status \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718601 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718702 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718973 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body=
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.719119 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.737956 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.740496 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.749570 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801739 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") "
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801960 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") "
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802251 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802286 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.814840 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903427 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903510 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903528 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.072465 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.084490 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.142231 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144623 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" exitCode=0
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144689 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" exitCode=2
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144712 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" exitCode=0
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144729 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" exitCode=0
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144739 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144967 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.149350 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150471 4183 generic.go:334] "Generic (PLEG): container finished" podID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerID="0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6" exitCode=0
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"}
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.272296 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.307600 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" path="/var/lib/kubelet/pods/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed/volumes"
Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.308471 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9256d8ee968b89d58cda59af60969" path="/var/lib/kubelet/pods/56d9256d8ee968b89d58cda59af60969/volumes"
Aug 13 20:08:01 crc kubenswrapper[4183]: E0813 20:08:01.370919 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice/crio-a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866\": RecentStats: unable to find data in memory cache]"
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.701939 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") "
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") "
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726656 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") "
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726837 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock" (OuterVolumeSpecName: "var-lock") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726907 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727044 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727061 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.737672 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.828096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"}
Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164755 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"
Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164921 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.210374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233240 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233318 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.254392 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.259540 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.267557 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.285068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.294482 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207101 4183 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3" exitCode=0
Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207402 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3"}
Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"7d38e4405721e751ffe695369180693433405ae4331549aed5834d79ed44b3ee"}
Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"f484dd54fa6f1d9458704164d3b0d07e7de45fc1c5c3732080db88204b97a260"}
Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"321449b7baef718aa4f8e6a5e8027626824e675a08ec111132c5033a8de2bea4"}
Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.251534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"748707f199ebf717d7b583f31dd21339f68d06a1f3fe2bd66ad8cd355863d0b6"}
Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.252067 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.208554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230189 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230240 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.257216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.2571685630000005 podStartE2EDuration="4.257168563s" podCreationTimestamp="2025-08-13 20:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:11.277452103 +0000 UTC m=+1457.970116921" watchObservedRunningTime="2025-08-13 20:08:12.257168563 +0000 UTC m=+1458.949833291"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.259925 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.268844 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.272823 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.292493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.302328 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288033 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"0be6c231766bb308c5fd1c35f7d778e9085ef87b609e771c9b8c0562273f73af"}
Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288425 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"2a5d2c4f8091434e96a501a9652a7fc6eabd91a48a80b63a8e598b375d046dcf"}
Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288449 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"134690fa1c76729c58b7776be3ce993405e907d37bcd9895349f1550b9cb7b4e"}
Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b3f81ba7d134155fdc498a60346928d213e2da7a3f20f0b50f64409568a246cc"}
Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298848 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"dd5de1da9d2aa603827fd445dd57c562cf58ea00258cc5b64a324701843c502b"}
Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.346705 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.34665693 podStartE2EDuration="2.34665693s" podCreationTimestamp="2025-08-13 20:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:14.341638536 +0000 UTC m=+1461.034303354" watchObservedRunningTime="2025-08-13 20:08:14.34665693 +0000 UTC m=+1461.039321658"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.293526 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.294368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298330 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.299395 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.301153 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.369525 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.361444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769578 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769759 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7f47300841026200cf071984642de38e" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.770065 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770092 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770659 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771150 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" containerID="cri-o://cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" gracePeriod=15
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771208 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" gracePeriod=15
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771215 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" gracePeriod=15
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771239 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" gracePeriod=15
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771375 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" gracePeriod=15
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772366 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772453 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772611 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772625 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772647 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772655 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772665 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772674 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772684 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772692 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772704 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772712 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772728 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772885 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772939 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772961 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852631 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852745 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852875 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852946 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852979 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853006 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853028 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.878338 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954844 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954988 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955089 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955272 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955281 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") "
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955367 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.174115 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:24 crc kubenswrapper[4183]: E0813 20:08:24.241628 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.372432 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"887b3913b57be6cd6694b563992e615df63b28b24f279e51986fb9dfc689f5d5"} Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390453 4183 generic.go:334] "Generic (PLEG): container finished" podID="3557248c-8f70-4165-aa66-8df983e7e01a" containerID="6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390594 4183 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"} Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.395765 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.397652 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.399281 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.414309 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416055 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416100 4183 generic.go:334] "Generic (PLEG): 
container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416115 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416127 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" exitCode=2 Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.214399 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.216001 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.217007 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.440382 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.442184 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.436735 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89"} Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.886490 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.888411 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.889866 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.995965 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996063 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996285 4183 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock" (OuterVolumeSpecName: "var-lock") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996363 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.005385 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.097962 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098312 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098332 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.174745 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.178136 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.181246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.182057 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.183114 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.183129 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445472 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445476 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"} Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445574 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.449279 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.451519 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.478514 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.479931 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.858069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.859873 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.862061 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863006 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863981 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920653 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920915 
4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920982 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921497 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921532 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921543 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.218998 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48128e8d38b5cbcd2691da698bd9cac3" path="/var/lib/kubelet/pods/48128e8d38b5cbcd2691da698bd9cac3/volumes" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.458319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459534 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" exitCode=0 Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459608 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459755 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.462241 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.464065 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.466914 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.468362 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.470527 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 
192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.471441 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.513125 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.624083 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.690658 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.727822 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.785051 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.863453 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.864654 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864760 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"} err="failed to get container status \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864855 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.865988 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866096 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"} err="failed to get container status \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866111 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.866831 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866880 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"} err="failed to get container status \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866925 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.868091 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868222 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"} err="failed to get container status \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868252 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.869097 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869152 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"} err="failed to get container status \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869166 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.870079 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.870130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} err="failed to get container status \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist"
Aug 13 20:08:28 crc kubenswrapper[4183]: E0813 20:08:28.434605 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.410013 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.412321 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.413478 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.414387 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.415398 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:32 crc kubenswrapper[4183]: I0813 20:08:32.422569 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.424377 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms"
Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.626301 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms"
Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.028474 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms"
Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.830041 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s"
Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.213617 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.215381 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:35 crc kubenswrapper[4183]: E0813 20:08:35.431177 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.521459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.523202 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.524232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.525871 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526512 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526527 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.211765 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.212614 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231367 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231761 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:37 crc kubenswrapper[4183]: E0813 20:08:37.233020 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.233654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.538540 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"302d89cfbab2c80a69d727fd8c30e727ff36453533105813906fa746343277a0"}
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.437606 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546455 4183 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" exitCode=0
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546519 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0"}
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546956 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546972 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.548383 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.551440 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.553221 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.554631 4183 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.633940 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559148 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282"}
Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807"}
Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599184 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3"}
Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078"}
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333"}
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611749 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611849 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.612213 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234267 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234736 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342162 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.273716 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.471929 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.525141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53c20181-da08-4c94-91d7-6f71a843fa75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T20:08:37Z\\\"}}}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"53c20181-da08-4c94-91d7-6f71a843fa75\": field is immutable"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.593733 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653927 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653970 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.665200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.671109 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660687 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660738 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748075 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748960 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748992 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749206 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749313 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:08:55 crc kubenswrapper[4183]: I0813 20:08:55.227202 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.627330 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.631933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.982066 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.147301 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.293535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.296700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.461026 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.601848 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.117265 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.177676 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.254728 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.262980 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.335459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.630933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.789658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.845263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.903631 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.057338 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.074697 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.110668 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.303377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.360247 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.464834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.489071 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.607957 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Aug 13
20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.720412 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.780720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.784394 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.795747 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.862674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.940179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.956659 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.085377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.178096 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.328063 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.447104 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.476288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.547427 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.641589 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.665206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.676310 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.681567 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.692079 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.769757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.785259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.957170 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.977180 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.081278 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.096022 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.099320 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.378915 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.386933 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.493464 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.498007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.511713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.686008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.695292 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.961043 4183 reflector.go:351] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.031525 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.102611 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.110397 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.141717 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.320726 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.446960 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.478887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.509574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.607414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.648203 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 
20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.774962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.947576 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.993438 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.998076 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.033861 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.037003 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.042158 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.068241 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.081452 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.101661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.189515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.265058 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.324465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.326161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.543695 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.547105 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.572449 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.598540 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.654289 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.672610 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.717240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.822302 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.968089 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.057616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.199184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.244267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.296634 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.313920 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.472644 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.481972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.506429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.556529 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.669561 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.695473 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.866327 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.914427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.977991 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.000600 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.010262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.018669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.055596 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.095466 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.112337 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.114240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.126649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.308156 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.309407 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.369216 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.518110 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.585833 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.595313 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.778450 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.831825 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 
20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.850352 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.962435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.157179 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.180116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.221351 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.250856 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.257683 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.279858 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.280641 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.301944 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.371653 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.376765 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.558063 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.609699 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.620979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.644389 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.671435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.696221 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.869656 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.871617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.884152 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.902953 4183 reflector.go:351] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.098194 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.125093 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.177401 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.363241 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.532440 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.672480 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.699313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.700878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.705558 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.782818 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.783315 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.858137 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.868186 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.999092 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.148008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.199442 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.265032 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.405863 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.430381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.460881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.505573 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" 
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.664845 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.780304 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.924032 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.937226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.072708 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.134052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.164281 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.227498 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.276419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.288036 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.370724 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.456064 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.457612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.458203 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.458141811 podStartE2EDuration="47.458141811s" podCreationTimestamp="2025-08-13 20:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:47.588553361 +0000 UTC m=+1494.281218409" watchObservedRunningTime="2025-08-13 20:09:10.458141811 +0000 UTC m=+1517.150806510" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462790 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462937 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.481349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.495878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.498050 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.506394 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.516937 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.516769112 podStartE2EDuration="23.516769112s" podCreationTimestamp="2025-08-13 20:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:09:10.508199597 +0000 UTC m=+1517.200864395" watchObservedRunningTime="2025-08-13 20:09:10.516769112 +0000 UTC m=+1517.209433890" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.610135 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.712759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.743313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.840994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.942279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.032092 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.093276 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: 
I0813 20:09:11.243481 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.289761 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.342288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.384979 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.572094 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.624107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.101727 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.141251 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.263078 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.362504 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.444336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.801094 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.813525 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.016540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.393057 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.499447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.526685 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.600389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.632243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.857723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.992095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 20:09:21 crc kubenswrapper[4183]: I0813 20:09:21.399619 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:21 crc 
kubenswrapper[4183]: I0813 20:09:21.401000 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" containerID="cri-o://92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" gracePeriod=5 Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975279 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975935 4183 generic.go:334] "Generic (PLEG): container finished" podID="7f47300841026200cf071984642de38e" containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" exitCode=137 Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058440 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058580 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170217 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170309 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170448 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170487 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170552 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170629 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log" (OuterVolumeSpecName: 
"var-log") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170706 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170749 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests" (OuterVolumeSpecName: "manifests") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170949 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170975 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170991 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.171005 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.181996 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218138 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f47300841026200cf071984642de38e" path="/var/lib/kubelet/pods/7f47300841026200cf071984642de38e/volumes" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218546 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.272738 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289033 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289098 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293089 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293166 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984729 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984982 4183 scope.go:117] "RemoveContainer" 
containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.985206 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:34 crc kubenswrapper[4183]: I0813 20:09:34.861454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:09:42 crc kubenswrapper[4183]: I0813 20:09:42.336888 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.750946 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751742 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751927 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751981 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:09:55 crc kubenswrapper[4183]: I0813 20:09:55.597745 4183 scope.go:117] "RemoveContainer" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.277768 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:15 crc kubenswrapper[4183]: 
I0813 20:10:15.278765 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.279955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.279984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.280009 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280021 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280316 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.283142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289029 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289532 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.378578 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.380575 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod 
\"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482381 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483053 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483370 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.525627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.609972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323769 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.324092 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.363837 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podStartSLOduration=1.363730948 podStartE2EDuration="1.363730948s" podCreationTimestamp="2025-08-13 20:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:10:16.360329401 +0000 UTC m=+1583.052994299" watchObservedRunningTime="2025-08-13 20:10:16.363730948 +0000 UTC m=+1583.056395666" Aug 13 20:10:17 crc kubenswrapper[4183]: I0813 20:10:17.407369 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:18 crc kubenswrapper[4183]: I0813 20:10:18.241296 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:19 crc kubenswrapper[4183]: I0813 20:10:19.343356 4183 kuberuntime_container.go:770] "Killing container with a grace period" 
pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" gracePeriod=30 Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.615052 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.619515 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621844 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621965 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.614950 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.617609 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621472 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621559 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.618009 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.623908 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626362 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626486 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.550765 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.550945 4183 generic.go:334] "Generic (PLEG): container finished" podID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" exitCode=137 Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551009 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551044 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"}
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551075 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584207 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584448 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706635 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706906 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707146 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708152 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready" (OuterVolumeSpecName: "ready") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708195 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708648 4183 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708672 4183 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708683 4183 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.719169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9" (OuterVolumeSpecName: "kube-api-access-25pz9") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "kube-api-access-25pz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.810314 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.560008 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8"
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.605358 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"]
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.611870 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"]
Aug 13 20:10:51 crc kubenswrapper[4183]: I0813 20:10:51.217828 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" path="/var/lib/kubelet/pods/b78e72e3-8ece-4d66-aa9c-25445bacdc99/volumes"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.752861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753599 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753657 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753739 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.709489 4183 scope.go:117] "RemoveContainer" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.758106 4183 scope.go:117] "RemoveContainer" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.792646 4183 scope.go:117] "RemoveContainer" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd"
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.755707 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.756438 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" containerID="cri-o://764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" gracePeriod=30
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.790837 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.791152 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" containerID="cri-o://3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" gracePeriod=30
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.353873 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.468116 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469581 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469685 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469734 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470165 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470498 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.473699 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.476019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.478873 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config" (OuterVolumeSpecName: "config") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.487118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.490218 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98" (OuterVolumeSpecName: "kube-api-access-spb98") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "kube-api-access-spb98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572528 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572630 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572681 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572732 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573142 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573163 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573175 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573186 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573198 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca" (OuterVolumeSpecName: "client-ca") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574419 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config" (OuterVolumeSpecName: "config") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.578612 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr" (OuterVolumeSpecName: "kube-api-access-nvfwr") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "kube-api-access-nvfwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.579214 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631669 4183 generic.go:334] "Generic (PLEG): container finished" podID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631841 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631874 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.632014 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639087 4183 generic.go:334] "Generic (PLEG): container finished" podID="becc7e17-2bc7-417d-832f-55127299d70f" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639175 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639536 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674046 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674428 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674522 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674622 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.718560 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.719728 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} err="failed to get container status \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720430 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.775971 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.787427 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.790274 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.793167 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.793238 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} err="failed to get container status \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.822961 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.846342 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.219888 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" path="/var/lib/kubelet/pods/8b8d1c48-5762-450f-bd4d-9134869f432b/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.220771 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becc7e17-2bc7-417d-832f-55127299d70f" path="/var/lib/kubelet/pods/becc7e17-2bc7-417d-832f-55127299d70f/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529530 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529740 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530159 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530179 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530191 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530199 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530215 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530222 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530383 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530400 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535306 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535403 4183 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.536177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.545713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546083 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546479 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546608 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.548592 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.550836 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.553742 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554245 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.555215 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.572420 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.600311 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688129 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688249 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688301 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688335 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688738 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688877 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689031 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689097 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.792008 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793212 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.806724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: 
\"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.817740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.832039 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.834455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.860524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.888227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.196323 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.292702 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Aug 13 20:11:02 crc kubenswrapper[4183]: W0813 20:11:02.303249 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d29937_debd_4407_b2b1_d1053cb0f342.slice/crio-c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88 WatchSource:0}: Error finding container c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88: Status 404 returned error can't find the container with id c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88 Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.667677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.668407 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670753 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88"} Aug 13 20:11:02 crc 
kubenswrapper[4183]: I0813 20:11:02.670864 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671078 4183 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671181 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673582 4183 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection 
refused" start-of-body= Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673645 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.701285 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podStartSLOduration=3.701183908 podStartE2EDuration="3.701183908s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.699009676 +0000 UTC m=+1629.391674674" watchObservedRunningTime="2025-08-13 20:11:02.701183908 +0000 UTC m=+1629.393848866" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.740758 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podStartSLOduration=3.740696931 podStartE2EDuration="3.740696931s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.739829186 +0000 UTC m=+1629.432494084" watchObservedRunningTime="2025-08-13 20:11:02.740696931 +0000 UTC m=+1629.433361929" Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.682819 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.689194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755271 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755913 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756028 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756079 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756124 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.757243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758015 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758059 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758090 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758135 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.917583 4183 scope.go:117] 
"RemoveContainer" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.959001 4183 scope.go:117] "RemoveContainer" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.001663 4183 scope.go:117] "RemoveContainer" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.041888 4183 scope.go:117] "RemoveContainer" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.759301 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760034 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760150 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.760866 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761674 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 
20:14:54.761741 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761815 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761868 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374435 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374945 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.375673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.378592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.379408 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.416621 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.471537 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472052 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472270 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.573741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.576120 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.585446 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.598138 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.699457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.025171 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.315680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.324076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.375455 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" 
podStartSLOduration=2.375358886 podStartE2EDuration="2.375358886s" podCreationTimestamp="2025-08-13 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:15:02.373158483 +0000 UTC m=+1869.065823261" watchObservedRunningTime="2025-08-13 20:15:02.375358886 +0000 UTC m=+1869.068023744" Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334093 4183 generic.go:334] "Generic (PLEG): container finished" podID="51936587-a4af-470d-ad92-8ab9062cbc72" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" exitCode=0 Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"} Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.645413 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728881 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728956 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.730207 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume" (OuterVolumeSpecName: "config-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.741647 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.756593 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7" (OuterVolumeSpecName: "kube-api-access-wf6f7") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "kube-api-access-wf6f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830174 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830264 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830278 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347776 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347539 4183 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.762499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763520 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763609 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763646 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.765066 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766207 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766249 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766277 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766315 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:16:56 crc kubenswrapper[4183]: I0813 
20:16:56.146559 4183 scope.go:117] "RemoveContainer" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.193441 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194055 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" podNamespace="openshift-marketplace" podName="certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: E0813 20:16:58.194328 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194342 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.195638 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.259855 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389343 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389447 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.490922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491109 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492075 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492098 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.518036 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.521542 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.870097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: W0813 20:16:58.874840 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e241cc6_c71d_4fa0_9a1a_18098bcf6594.slice/crio-18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f WatchSource:0}: Error finding container 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f: Status 404 returned error can't find the container with id 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f Aug 13 20:16:59 crc kubenswrapper[4183]: I0813 20:16:59.093491 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103133 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537" exitCode=0 Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103218 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.113335 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.181024 4183 kubelet.go:2429] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.181189 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" podNamespace="openshift-marketplace" podName="redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.185407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.265288 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319177 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319326 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319369 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421284 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421424 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422862 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.462297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.507167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.049659 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:01 crc kubenswrapper[4183]: W0813 20:17:01.065223 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda084eaff_10e9_439e_96f3_f3450fb14db7.slice/crio-95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 WatchSource:0}: Error finding container 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439: Status 404 returned error can't find the container with id 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.134559 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.145903 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151179 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" 
containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a" exitCode=0 Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151240 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a"} Aug 13 20:17:03 crc kubenswrapper[4183]: I0813 20:17:03.161241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.048838 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.049503 4183 topology_manager.go:215] "Topology Admit Handler" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" podNamespace="openshift-marketplace" podName="redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.050910 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.077652 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078043 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078266 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179865 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: 
\"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.180911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247450 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95" exitCode=0 Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:18 crc kubenswrapper[4183]: I0813 20:17:18.501218 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:19 crc kubenswrapper[4183]: I0813 20:17:19.268059 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"} Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.726525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.882632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.156903 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bbjz" podStartSLOduration=6.689642693 podStartE2EDuration="23.156848646s" podCreationTimestamp="2025-08-13 20:16:58 +0000 UTC" firstStartedPulling="2025-08-13 20:17:00.105515813 +0000 UTC m=+1986.798180411" lastFinishedPulling="2025-08-13 20:17:16.572721666 +0000 UTC m=+2003.265386364" observedRunningTime="2025-08-13 20:17:21.14682776 +0000 UTC m=+2007.839492668" watchObservedRunningTime="2025-08-13 20:17:21.156848646 +0000 UTC m=+2007.849513524" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.601317 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.294948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" 
event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"} Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298131 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be" exitCode=0 Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.318734 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661" exitCode=0 Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.319078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.328164 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"} Aug 13 20:17:25 crc kubenswrapper[4183]: I0813 20:17:25.786058 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsk78" podStartSLOduration=5.345693387 podStartE2EDuration="25.786006691s" podCreationTimestamp="2025-08-13 20:17:00 +0000 UTC" firstStartedPulling="2025-08-13 20:17:02.153570299 +0000 UTC 
m=+1988.846235017" lastFinishedPulling="2025-08-13 20:17:22.593883603 +0000 UTC m=+2009.286548321" observedRunningTime="2025-08-13 20:17:25.781553214 +0000 UTC m=+2012.474217902" watchObservedRunningTime="2025-08-13 20:17:25.786006691 +0000 UTC m=+2012.478671639" Aug 13 20:17:26 crc kubenswrapper[4183]: I0813 20:17:26.348657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"} Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522411 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522533 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:29 crc kubenswrapper[4183]: I0813 20:17:29.752257 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" probeResult="failure" output=< Aug 13 20:17:29 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:17:29 crc kubenswrapper[4183]: > Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.356548 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.357267 4183 topology_manager.go:215] "Topology Admit Handler" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" podNamespace="openshift-marketplace" podName="community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.359125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397519 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397720 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397941 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.465031 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500478 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501491 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508324 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508371 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.580356 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.687703 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.690202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.157708 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.386560 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"} Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.552454 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.398376 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395" exitCode=0 Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.400080 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395"} Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.148460 4183 kubelet.go:2445] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.149759 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsk78" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" containerID="cri-o://e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" gracePeriod=2
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430402 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" exitCode=0
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"}
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.436848 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"}
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.735554 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78"
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779065 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779167 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779255 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.780384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities" (OuterVolumeSpecName: "utilities") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.790133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg" (OuterVolumeSpecName: "kube-api-access-sjvpg") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "kube-api-access-sjvpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880210 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880249 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.912682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.981512 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451597 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"}
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451670 4183 scope.go:117] "RemoveContainer" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451886 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.507084 4183 scope.go:117] "RemoveContainer" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.558206 4183 scope.go:117] "RemoveContainer" containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.856002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.946699 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:37 crc kubenswrapper[4183]: I0813 20:17:37.233945 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" path="/var/lib/kubelet/pods/a084eaff-10e9-439e-96f3-f3450fb14db7/volumes"
Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.703123 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.841230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:39 crc kubenswrapper[4183]: I0813 20:17:39.170438 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:40 crc kubenswrapper[4183]: I0813 20:17:40.478207 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" containerID="cri-o://f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" gracePeriod=2
Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497339 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" exitCode=0
Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497393 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"}
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.186627 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.285473 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286067 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286932 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities" (OuterVolumeSpecName: "utilities") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.287345 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.289686 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.294325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw" (OuterVolumeSpecName: "kube-api-access-c56vw") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "kube-api-access-c56vw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.392494 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511412 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"}
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511496 4183 scope.go:117] "RemoveContainer" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511652 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.556128 4183 scope.go:117] "RemoveContainer" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.582229 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.602192 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.645938 4183 scope.go:117] "RemoveContainer" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537"
Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.247674 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.309950 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:47 crc kubenswrapper[4183]: I0813 20:17:47.219237 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" path="/var/lib/kubelet/pods/8e241cc6-c71d-4fa0-9a1a-18098bcf6594/volumes"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.767616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768291 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768565 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768832 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790031 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906" exitCode=0
Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"}
Aug 13 20:18:24 crc kubenswrapper[4183]: I0813 20:18:24.830046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"}
Aug 13 20:18:28 crc kubenswrapper[4183]: I0813 20:18:28.667179 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfv59" podStartSLOduration=8.94479276 podStartE2EDuration="58.667068725s" podCreationTimestamp="2025-08-13 20:17:30 +0000 UTC" firstStartedPulling="2025-08-13 20:17:32.401991306 +0000 UTC m=+2019.094655904" lastFinishedPulling="2025-08-13 20:18:22.124267171 +0000 UTC m=+2068.816931869" observedRunningTime="2025-08-13 20:18:28.658892431 +0000 UTC m=+2075.351557529" watchObservedRunningTime="2025-08-13 20:18:28.667068725 +0000 UTC m=+2075.359733513"
Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.691065 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.692101 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:31 crc kubenswrapper[4183]: I0813 20:18:31.812856 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:18:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:18:31 crc kubenswrapper[4183]: >
Aug 13 20:18:42 crc kubenswrapper[4183]: I0813 20:18:42.212915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:18:42 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:18:42 crc kubenswrapper[4183]: >
Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.817136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.931347 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:51 crc kubenswrapper[4183]: I0813 20:18:51.204545 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:52 crc kubenswrapper[4183]: I0813 20:18:52.054359 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" containerID="cri-o://9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" gracePeriod=2
Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066555 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" exitCode=0
Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066676 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"}
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.355104 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503611 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503694 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503871 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.505841 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities" (OuterVolumeSpecName: "utilities") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.511381 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh" (OuterVolumeSpecName: "kube-api-access-j46mh") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "kube-api-access-j46mh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605191 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.772825 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773000 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773176 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087090 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"}
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087180 4183 scope.go:117] "RemoveContainer" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087336 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.132051 4183 scope.go:117] "RemoveContainer" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.155373 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.193463 4183 scope.go:117] "RemoveContainer" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.219316 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.533634 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.585294 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:57 crc kubenswrapper[4183]: I0813 20:18:57.218185 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" path="/var/lib/kubelet/pods/718f06fe-dcad-4053-8de2-e2c38fb7503d/volumes"
Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120167 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783" exitCode=0
Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120258 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"}
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.131839 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"}
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.224845 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-swl5s" podStartSLOduration=11.079722633 podStartE2EDuration="1m46.221985722s" podCreationTimestamp="2025-08-13 20:17:14 +0000 UTC" firstStartedPulling="2025-08-13 20:17:24.321737916 +0000 UTC m=+2011.014402594" lastFinishedPulling="2025-08-13 20:18:59.464001005 +0000 UTC m=+2106.156665683" observedRunningTime="2025-08-13 20:19:00.220231852 +0000 UTC m=+2106.912896660" watchObservedRunningTime="2025-08-13 20:19:00.221985722 +0000 UTC m=+2106.914651530"
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883357 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883456 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:01 crc kubenswrapper[4183]: I0813 20:19:01.993382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:01 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:01 crc kubenswrapper[4183]: >
Aug 13 20:19:12 crc kubenswrapper[4183]: I0813 20:19:12.039276 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:12 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:12 crc kubenswrapper[4183]: >
Aug 13 20:19:21 crc kubenswrapper[4183]: I0813 20:19:21.985070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:21 crc kubenswrapper[4183]: >
Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.006405 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.122567 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138114 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138918 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" containerID="cri-o://6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" gracePeriod=2
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397883 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" exitCode=0
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"}
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.611367 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735233 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735402 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.736719 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities" (OuterVolumeSpecName: "utilities") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.742886 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n" (OuterVolumeSpecName: "kube-api-access-48x8n") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "kube-api-access-48x8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.839950 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.840044 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415040 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"}
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415089 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415176 4183 scope.go:117] "RemoveContainer" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.479710 4183 scope.go:117] "RemoveContainer" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.716961 4183 scope.go:117] "RemoveContainer" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.736163 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.764101 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.358735 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.604074 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:39 crc kubenswrapper[4183]: I0813 20:19:39.217381 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" path="/var/lib/kubelet/pods/407a8505-ab64-42f9-aa53-a63f8e97c189/volumes"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.774766 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776210 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776267 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776328 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.780947 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781628 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781833 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.783726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.784718 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785676 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785728 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.786005 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.786811 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787500 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787549 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787580 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.788392 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789353 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789391 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.790268 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791272 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791350 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.792447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793238 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793278 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793314 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793340 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794075 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794888 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795014 4183 kubelet_getters.go:187] "Pod status updated"
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795061 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795093 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681077 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681897 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b152b92f-8fab-4b74-8e68-00278380759d" podNamespace="openshift-marketplace" podName="redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684542 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684698 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684728 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684735 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684752 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684759 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" 
containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684841 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684867 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684880 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684887 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684908 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684918 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684925 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684937 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684944 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" 
containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684962 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684975 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684982 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685027 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685041 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685052 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685059 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685448 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685487 4183 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685502 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.686679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.725355 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734441 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: 
\"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838250 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.836613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838404 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.839029 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " 
pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843107 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" podNamespace="openshift-marketplace" podName="certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.847188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.880146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.881068 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.941762 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: 
\"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942116 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.012530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.044525 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.045458 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.083111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.172146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.522088 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.627904 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.815655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"} Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.817284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828702 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828899 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.833166 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834515 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" 
containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.846077 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.849557 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932398 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" exitCode=0 Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:17 crc kubenswrapper[4183]: I0813 20:27:17.952187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} Aug 13 20:27:18 crc kubenswrapper[4183]: 
I0813 20:27:18.623429 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbzn9" podStartSLOduration=5.083855925 podStartE2EDuration="13.623333743s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.836421672 +0000 UTC m=+2594.529086440" lastFinishedPulling="2025-08-13 20:27:16.37589966 +0000 UTC m=+2603.068564258" observedRunningTime="2025-08-13 20:27:18.616155369 +0000 UTC m=+2605.308820377" watchObservedRunningTime="2025-08-13 20:27:18.623333743 +0000 UTC m=+2605.315998621" Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966283 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" exitCode=0 Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:19 crc kubenswrapper[4183]: I0813 20:27:19.985472 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} Aug 13 20:27:20 crc kubenswrapper[4183]: I0813 20:27:20.034729 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xldzg" podStartSLOduration=3.500986964 podStartE2EDuration="15.034677739s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.832168011 +0000 UTC m=+2594.524832719" lastFinishedPulling="2025-08-13 20:27:19.365858876 +0000 UTC m=+2606.058523494" observedRunningTime="2025-08-13 
20:27:20.028528893 +0000 UTC m=+2606.721193801" watchObservedRunningTime="2025-08-13 20:27:20.034677739 +0000 UTC m=+2606.727342477" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.013496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.015469 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.171177 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.173954 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.174409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.312207 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.173669 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.174635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.267615 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.431673 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 
13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.069858 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbzn9" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" containerID="cri-o://7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.070204 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xldzg" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" containerID="cri-o://88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.551734 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.565636 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9"
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706587 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707191 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707537 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities" (OuterVolumeSpecName: "utilities") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities" (OuterVolumeSpecName: "utilities") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.714867 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g" (OuterVolumeSpecName: "kube-api-access-tcz8g") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "kube-api-access-tcz8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.715290 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6" (OuterVolumeSpecName: "kube-api-access-sfrr6") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "kube-api-access-sfrr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810149 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.846204 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.911927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.944382 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.013927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078332 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" exitCode=0
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078464 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078506 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078669 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087593 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" exitCode=0
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087681 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.089151 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.155105 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.230393 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.247602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.259374 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.266146 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.278132 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.324392 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.326065 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326155 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} err="failed to get container status \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326184 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327105 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327149 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} err="failed to get container status \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327166 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327955 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328062 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} err="failed to get container status \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328084 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.363618 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.424486 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.478357 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.479580 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479858 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} err="failed to get container status \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479883 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.480605 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480680 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} err="failed to get container status \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": rpc error: code = NotFound desc = could not find container \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480697 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.481149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.481210 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} err="failed to get container status \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist"
Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.218427 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" path="/var/lib/kubelet/pods/926ac7a4-e156-4e71-9681-7a48897402eb/volumes"
Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.219874 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b152b92f-8fab-4b74-8e68-00278380759d" path="/var/lib/kubelet/pods/b152b92f-8fab-4b74-8e68-00278380759d/volumes"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.796855 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797488 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797558 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797597 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.324677 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325567 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" podNamespace="openshift-marketplace" podName="community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325926 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325946 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325959 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325966 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325982 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325989 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326029 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326047 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326063 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326072 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326125 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326308 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326322 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.327661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.360377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377601 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.378243 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.516418 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.659547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.064674 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.629205 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" exitCode=0
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"}
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630922 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"}
Aug 13 20:28:45 crc kubenswrapper[4183]: I0813 20:28:45.657598 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"}
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.798527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799512 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799589 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799690 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754900 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"}
Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754912 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" exitCode=0
Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.779256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"}
Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.823743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvwvm" podStartSLOduration=3.175837032 podStartE2EDuration="16.823670146s" podCreationTimestamp="2025-08-13 20:28:43 +0000 UTC" firstStartedPulling="2025-08-13 20:28:44.639101497 +0000 UTC m=+2691.331766095" lastFinishedPulling="2025-08-13 20:28:58.286934521 +0000 UTC m=+2704.979599209" observedRunningTime="2025-08-13 20:28:59.820758222 +0000 UTC m=+2706.513422960" watchObservedRunningTime="2025-08-13 20:28:59.823670146 +0000 UTC m=+2706.516334874"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660115 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660963 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.780392 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.914752 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.990443 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:05 crc kubenswrapper[4183]: I0813 20:29:05.815902 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvwvm" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" containerID="cri-o://133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" gracePeriod=2
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.270104 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.449566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450180 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450371 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.451196 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities" (OuterVolumeSpecName: "utilities") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.457914 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz" (OuterVolumeSpecName: "kube-api-access-j4wdz") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "kube-api-access-j4wdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551885 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551946 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831648 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" exitCode=0
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831920 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"}
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"}
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832103 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832179 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.886425 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.958360 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.001299 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.002724 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002860 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"} err="failed to get container status \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002883 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.003455 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"} err="failed to get container status \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003548 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.004426 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.004459 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"} err="failed to get container status \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.133046 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.159478 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.474406 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.488264 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:09 crc kubenswrapper[4183]: I0813 20:29:09.217193 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" path="/var/lib/kubelet/pods/bfb8fd54-a923-43fe-a0f5-bc4066352d71/volumes"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.105720 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" podNamespace="openshift-marketplace" podName="redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106870 4183 cpu_manager.go:396]
"RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106886 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities" Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106906 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content" Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106923 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.107125 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.115316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.142749 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293194 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293265 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293294 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.394671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395684 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.439308 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.443745 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.797719 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:29:31 crc kubenswrapper[4183]: I0813 20:29:31.010510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"} Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.020856 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" exitCode=0 Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.021000 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"} Aug 13 20:29:33 crc kubenswrapper[4183]: I0813 20:29:33.030834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.801138 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802303 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802388 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" 
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802449 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.984271 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.985070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd" Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.985900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008444 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.036942 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: 
\"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076843 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.077488 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.180707 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.190825 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.218103 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.322129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.812554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273725 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"} Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"} Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.327749 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" podStartSLOduration=2.327674238 podStartE2EDuration="2.327674238s" podCreationTimestamp="2025-08-13 20:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:30:03.323089886 +0000 UTC m=+2770.015754874" watchObservedRunningTime="2025-08-13 20:30:03.327674238 +0000 UTC m=+2770.020338866" Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290513 4183 generic.go:334] "Generic (PLEG): container finished" podID="ad171c4b-8408-4370-8e86-502999788ddb" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" exitCode=0 Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"} Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.889910 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.968429 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969155 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969974 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.972559 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.000000 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.001682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw" (OuterVolumeSpecName: "kube-api-access-pmlcw") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "kube-api-access-pmlcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073397 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073542 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073637 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.307944 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.308046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"} Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.309566 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.313402 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" exitCode=0 Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.314010 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.188369 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.202397 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.323625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 
20:30:08.376959 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdwjn" podStartSLOduration=2.644749574 podStartE2EDuration="38.376906603s" podCreationTimestamp="2025-08-13 20:29:30 +0000 UTC" firstStartedPulling="2025-08-13 20:29:32.023072954 +0000 UTC m=+2738.715737712" lastFinishedPulling="2025-08-13 20:30:07.755230113 +0000 UTC m=+2774.447894741" observedRunningTime="2025-08-13 20:30:08.369449078 +0000 UTC m=+2775.062113856" watchObservedRunningTime="2025-08-13 20:30:08.376906603 +0000 UTC m=+2775.069571331" Aug 13 20:30:09 crc kubenswrapper[4183]: I0813 20:30:09.217942 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" path="/var/lib/kubelet/pods/8500d7bd-50fb-4ca6-af41-b7a24cae43cd/volumes" Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.444935 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.445312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:11 crc kubenswrapper[4183]: I0813 20:30:11.559391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=< Aug 13 20:30:11 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:30:11 crc kubenswrapper[4183]: > Aug 13 20:30:21 crc kubenswrapper[4183]: I0813 20:30:21.571657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=< Aug 13 20:30:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s 
Aug 13 20:30:21 crc kubenswrapper[4183]: > Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.639012 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.789286 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.862664 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.506496 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" containerID="cri-o://7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" gracePeriod=2 Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.931506 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984564 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984743 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984919 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.987281 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities" (OuterVolumeSpecName: "utilities") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.995193 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8" (OuterVolumeSpecName: "kube-api-access-r6rj8") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "kube-api-access-r6rj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.086897 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.087266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521199 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" exitCode=0 Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"} Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521312 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521409 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.589471 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.818192 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.892207 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.897265 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897391 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} err="failed to get container status \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897418 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.898541 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898707 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} err="failed to get container status \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898943 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.899705 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.899762 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"} err="failed to get container status \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.930635 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.008519 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.175424 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.188387 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:30:35 crc kubenswrapper[4183]: I0813 20:30:35.217865 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" path="/var/lib/kubelet/pods/6d579e1a-3b27-4c1f-9175-42ac58490d42/volumes"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.803495 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804074 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804179 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804222 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804256 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:30:56 crc kubenswrapper[4183]: I0813 20:30:56.527744 4183 scope.go:117] "RemoveContainer" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"
Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.805259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806303 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806341 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.807668 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808421 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808465 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.809699 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810371 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810472 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.810974 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.811990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813971 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814025 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814174 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814227 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.815418 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816161 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816230 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816266 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816304 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.226038 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227116 4183 topology_manager.go:215] "Topology Admit Handler" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" podNamespace="openshift-marketplace" podName="redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227465 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227489 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server"
Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227519 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227529 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities"
Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227576 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227589 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content"
Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227600 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227610 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.231919 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.231972 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.233395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.272736 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360188 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360524 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.462502 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.493262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.563669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.897981 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610098 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" exitCode=0
Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"}
Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610530 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"}
Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.614029 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Aug 13 20:37:50 crc kubenswrapper[4183]: I0813 20:37:50.621086 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"}
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.659569 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" exitCode=0
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.660074 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"}
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816871 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816963 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817010 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817053 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817088 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:37:55 crc kubenswrapper[4183]: I0813 20:37:55.670764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"}
Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.565755 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.566326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.676409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.705440 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkzlk" podStartSLOduration=5.288354689 podStartE2EDuration="10.705385893s" podCreationTimestamp="2025-08-13 20:37:48 +0000 UTC" firstStartedPulling="2025-08-13 20:37:49.613412649 +0000 UTC m=+3236.306077307" lastFinishedPulling="2025-08-13 20:37:55.030443883 +0000 UTC m=+3241.723108511" observedRunningTime="2025-08-13 20:37:56.514890851 +0000 UTC m=+3243.207556409" watchObservedRunningTime="2025-08-13 20:37:58.705385893 +0000 UTC m=+3245.398050771"
Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.683194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.749777 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.764345 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkzlk" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" containerID="cri-o://8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" gracePeriod=2
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.176666 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.217983 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") "
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218293 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") "
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218355 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") "
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.219426 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities" (OuterVolumeSpecName: "utilities") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.226278 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9" (OuterVolumeSpecName: "kube-api-access-9gcn9") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "kube-api-access-9gcn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320361 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320929 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.366919 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.422616 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776026 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" exitCode=0
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776115 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"}
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777248 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"}
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777285 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.829179 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.866063 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.875230 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"]
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.883982 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.922409 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"
Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923230 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923304 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} err="failed to get container status \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923319 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"
Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923941 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923970 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} err="failed to get container status \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": rpc error: code = NotFound desc = could not find container \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923981 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"
Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.925057 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.925250 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"} err="failed to get container status \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist"
Aug 13 20:38:11 crc kubenswrapper[4183]: I0813 20:38:11.217764 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" path="/var/lib/kubelet/pods/afc02c17-9714-426d-aafa-ee58c673ab0c/volumes"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.093544 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096217 4183 topology_manager.go:215] "Topology Admit Handler" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" podNamespace="openshift-marketplace" podName="certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.096659 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096835 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104025 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104087 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104122 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104129 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104443 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.105518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.143532 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.204094 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.306221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307045 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307051 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.340674 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlp7\" (UniqueName:
\"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.431705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.809750 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.985997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"} Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994454 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" exitCode=0 Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"} Aug 13 20:38:39 crc kubenswrapper[4183]: I0813 20:38:39.004230 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041088 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" 
containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" exitCode=0 Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041438 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.050620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.084743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kmbv" podStartSLOduration=2.689452066 podStartE2EDuration="9.084667311s" podCreationTimestamp="2025-08-13 20:38:36 +0000 UTC" firstStartedPulling="2025-08-13 20:38:37.996633082 +0000 UTC m=+3284.689297820" lastFinishedPulling="2025-08-13 20:38:44.391848357 +0000 UTC m=+3291.084513065" observedRunningTime="2025-08-13 20:38:45.080307175 +0000 UTC m=+3291.772971963" watchObservedRunningTime="2025-08-13 20:38:45.084667311 +0000 UTC m=+3291.777332029" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.432635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.433566 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.551433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 
20:38:54.817852 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818423 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818467 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.564125 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.644422 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.141812 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kmbv" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" containerID="cri-o://4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" gracePeriod=2 Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.533422 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617553 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617652 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.618960 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities" (OuterVolumeSpecName: "utilities") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.628370 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7" (OuterVolumeSpecName: "kube-api-access-bqlp7") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "kube-api-access-bqlp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719139 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719228 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.842955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.921914 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151335 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" exitCode=0 Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151452 4183 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"} Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151497 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151628 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.199060 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.240373 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.246222 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.267919 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.320226 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.321862 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 
20:38:58.321944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} err="failed to get container status \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.321968 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.322957 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323051 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} err="failed to get container status \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323071 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc 
kubenswrapper[4183]: E0813 20:38:58.323851 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323918 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"} err="failed to get container status \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist" Aug 13 20:38:59 crc kubenswrapper[4183]: I0813 20:38:59.221999 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" path="/var/lib/kubelet/pods/847e60dc-7a0a-4115-a7e1-356476e319e7/volumes" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.819395 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820101 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820237 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820279 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821089 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821872 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821984 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.822014 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.457733 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458497 4183 topology_manager.go:215] "Topology Admit Handler" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" podNamespace="openshift-marketplace" podName="redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458870 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458891 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 
20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458911 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458919 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content" Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458935 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458943 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.459099 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.463161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.560744 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638643 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741155 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.775996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.813097 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:22 crc kubenswrapper[4183]: I0813 20:41:22.212454 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.138668 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"} Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140092 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" exitCode=0 Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"} Aug 13 20:41:24 crc kubenswrapper[4183]: I0813 20:41:24.153949 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.416680 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" exitCode=0 Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.417522 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" 
event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} Aug 13 20:41:50 crc kubenswrapper[4183]: I0813 20:41:50.435617 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814499 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814605 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:52 crc kubenswrapper[4183]: I0813 20:41:52.942710 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=< Aug 13 20:41:52 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:41:52 crc kubenswrapper[4183]: > Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.822617 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823133 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 
20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823299 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:42:02 crc kubenswrapper[4183]: I0813 20:42:02.939416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:42:02 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:42:02 crc kubenswrapper[4183]: >
Aug 13 20:42:11 crc kubenswrapper[4183]: I0813 20:42:11.984442 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.028486 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2tgr" podStartSLOduration=25.310193169 podStartE2EDuration="51.028422928s" podCreationTimestamp="2025-08-13 20:41:21 +0000 UTC" firstStartedPulling="2025-08-13 20:41:23.140881353 +0000 UTC m=+3449.833546071" lastFinishedPulling="2025-08-13 20:41:48.859111222 +0000 UTC m=+3475.551775830" observedRunningTime="2025-08-13 20:41:50.480344302 +0000 UTC m=+3477.173009280" watchObservedRunningTime="2025-08-13 20:42:12.028422928 +0000 UTC m=+3498.721087656"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.100927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.176489 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.263240 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.587508 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" containerID="cri-o://d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" gracePeriod=2
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.985208 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.243675 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329446 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329529 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329562 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.330725 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities" (OuterVolumeSpecName: "utilities") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.346140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9" (OuterVolumeSpecName: "kube-api-access-shhm9") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "kube-api-access-shhm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431373 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431440 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622657 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" exitCode=0
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"}
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622765 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"}
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622852 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.623034 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.791096 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.903231 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.973171 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.974453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974568 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} err="failed to get container status \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974596 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.975768 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976375 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} err="failed to get container status \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976404 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.977560 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.977600 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"} err="failed to get container status \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": rpc error: code = NotFound desc = could not find container \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist"
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.279549 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.345759 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.645911 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.671541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:16 crc kubenswrapper[4183]: I0813 20:42:16.591921 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:17 crc kubenswrapper[4183]: I0813 20:42:17.218922 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" path="/var/lib/kubelet/pods/58e4f786-ee2a-45c4-83a4-523611d1eccd/volumes"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022059 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022931 4183 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023252 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023293 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023313 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023325 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023345 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023355 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023548 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.033492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.042188 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.209469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.210951 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.211019 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313570 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.356133 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.889621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:27 crc kubenswrapper[4183]: I0813 20:42:27.900601 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:28 crc kubenswrapper[4183]: I0813 20:42:28.727615 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a"}
Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758640 4183 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" exitCode=0
Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758743 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f"}
Aug 13 20:42:34 crc systemd[1]: Stopping Kubernetes Kubelet...
Aug 13 20:42:34 crc kubenswrapper[4183]: I0813 20:42:34.901075 4183 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:34 crc systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 20:42:34 crc systemd[1]: Stopped Kubernetes Kubelet.
Aug 13 20:42:34 crc systemd[1]: kubelet.service: Consumed 9min 48.169s CPU time.
-- Boot f3184b53def340458c4e6960b677da38 --
Dec 03 00:02:09 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 00:02:10 crc kubenswrapper[2988]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.357067 2988 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360394 2988 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360424 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360432 2988 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360439 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360446 2988 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360455 2988 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360463 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360470 2988 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360477 2988 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360484 2988 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360491 2988 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360498 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360505 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360512 2988 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360519 2988 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360526 2988 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360533 2988 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360541 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360549 2988 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360556 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360562 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360574 2988 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360582 2988 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360588 2988 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360595 2988 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360602 2988 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360609 2988 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360616 2988 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360623 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360629 2988 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360636 2988 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360643 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360650 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360657 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360664 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360670 2988 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360677 2988 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360686 2988 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360694 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360702 2988 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360709 2988 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360717 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360725 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360732 2988 feature_gate.go:227] unrecognized feature gate: Example
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360739 2988 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360746 2988 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360753 2988 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360760 2988 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360767 2988 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360774 2988 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360808 2988 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360816 2988 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360823 2988 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360831 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360839 2988 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360847 2988 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360854 2988 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360861 2988 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360870 2988 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.360878 2988 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366210 2988 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366254 2988 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366268 2988 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366276 2988 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366285 2988 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366291 2988 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366300 2988 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366307 2988 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366313 2988 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366320 2988 flags.go:64] FLAG: --azure-container-registry-config=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366326 2988 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366332 2988 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366339 2988 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366345 2988 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366351 2988 flags.go:64] FLAG: --cgroup-root=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366357 2988 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366363 2988 flags.go:64] FLAG: --client-ca-file=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366369 2988 flags.go:64] FLAG: --cloud-config=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366375 2988 flags.go:64] FLAG: --cloud-provider=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366381 2988 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366388 2988 flags.go:64] FLAG: --cluster-domain=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366394 2988 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366402 2988 flags.go:64] FLAG: --config-dir=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366408 2988 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366415 2988 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366423 2988 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366430 2988 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366436 2988 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366443 2988 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366450 2988 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366456 2988 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366462 2988 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366468 2988 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366475 2988 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366483 2988 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366489 2988 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366495 2988 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366501 2988 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366507 2988 flags.go:64] FLAG: --enable-server="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366513 2988 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366522 2988 flags.go:64] FLAG: --event-burst="100"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366529 2988 flags.go:64] FLAG: --event-qps="50"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366535 2988 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366541 2988 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366547 2988 flags.go:64] FLAG: --eviction-hard=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366554 2988 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366560 2988 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366565 2988 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366571 2988 flags.go:64] FLAG: --eviction-soft=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366577 2988 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366583 2988 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366589 2988 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366595 2988 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366600 2988 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366606 2988 flags.go:64] FLAG: --feature-gates=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366613 2988 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366620 2988 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366626 2988 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366632 2988 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366639 2988 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366645 2988 flags.go:64] FLAG: --help="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366652 2988 flags.go:64] FLAG: --hostname-override=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366658 2988 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366663 2988 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366669 2988 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366675 2988 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366681 2988 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366687 2988 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366692 2988 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366699 2988 flags.go:64] FLAG: --iptables-drop-bit="15"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366704 2988 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366710 2988 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366716 2988 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366722 2988 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366727 2988 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366734 2988 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366740 2988 flags.go:64] FLAG: --kube-reserved=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366746 2988 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 
00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366751 2988 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366758 2988 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366763 2988 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366769 2988 flags.go:64] FLAG: --lock-file="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366774 2988 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366780 2988 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366786 2988 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366795 2988 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366801 2988 flags.go:64] FLAG: --logging-format="text" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366807 2988 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366814 2988 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366820 2988 flags.go:64] FLAG: --manifest-url="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366826 2988 flags.go:64] FLAG: --manifest-url-header="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366834 2988 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366842 2988 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366849 2988 flags.go:64] FLAG: --max-pods="110" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366855 2988 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 
00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366862 2988 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366868 2988 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366874 2988 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366880 2988 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366886 2988 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366892 2988 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366906 2988 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366912 2988 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366918 2988 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366924 2988 flags.go:64] FLAG: --pod-cidr="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366931 2988 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366941 2988 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366947 2988 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366953 2988 flags.go:64] FLAG: --pods-per-core="0" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366959 2988 flags.go:64] FLAG: --port="10250" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366965 2988 
flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366971 2988 flags.go:64] FLAG: --provider-id="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366977 2988 flags.go:64] FLAG: --qos-reserved="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366982 2988 flags.go:64] FLAG: --read-only-port="10255" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366988 2988 flags.go:64] FLAG: --register-node="true" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.366994 2988 flags.go:64] FLAG: --register-schedulable="true" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367000 2988 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367014 2988 flags.go:64] FLAG: --registry-burst="10" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367020 2988 flags.go:64] FLAG: --registry-qps="5" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367028 2988 flags.go:64] FLAG: --reserved-cpus="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367034 2988 flags.go:64] FLAG: --reserved-memory="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367042 2988 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367048 2988 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367054 2988 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367060 2988 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367065 2988 flags.go:64] FLAG: --runonce="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367071 2988 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367077 2988 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367084 2988 flags.go:64] FLAG: --seccomp-default="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367089 2988 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367095 2988 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367101 2988 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367107 2988 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367113 2988 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367119 2988 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367125 2988 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367131 2988 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367137 2988 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367143 2988 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367172 2988 flags.go:64] FLAG: --system-cgroups="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367183 2988 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367194 2988 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367200 2988 flags.go:64] FLAG: --tls-cert-file="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367206 2988 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 
00:02:10.367213 2988 flags.go:64] FLAG: --tls-min-version="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367218 2988 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367224 2988 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367230 2988 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367235 2988 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367242 2988 flags.go:64] FLAG: --v="2" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367249 2988 flags.go:64] FLAG: --version="false" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367257 2988 flags.go:64] FLAG: --vmodule="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367264 2988 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367271 2988 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367350 2988 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367358 2988 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367365 2988 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367372 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367379 2988 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367386 2988 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367393 2988 
feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367399 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367406 2988 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367412 2988 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367420 2988 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367427 2988 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367434 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367441 2988 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367448 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367455 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367500 2988 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367507 2988 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367517 2988 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367524 2988 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367531 2988 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 03 
00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367539 2988 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367546 2988 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367554 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367562 2988 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367569 2988 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367577 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367584 2988 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367591 2988 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367598 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367605 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367613 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367620 2988 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367628 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367635 2988 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367642 2988 
feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367650 2988 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367657 2988 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367664 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367671 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367679 2988 feature_gate.go:227] unrecognized feature gate: Example Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367686 2988 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367693 2988 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367701 2988 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367708 2988 feature_gate.go:227] unrecognized feature gate: GatewayAPI Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367715 2988 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367722 2988 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367730 2988 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367737 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367744 2988 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367754 2988 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367761 2988 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367769 2988 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367776 2988 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367783 2988 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367794 2988 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367803 2988 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367811 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367818 2988 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.367826 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.367834 2988 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.382557 2988 server.go:487] 
"Kubelet version" kubeletVersion="v1.29.5+29c95f3" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.382616 2988 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382687 2988 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382701 2988 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382714 2988 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382726 2988 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382737 2988 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382747 2988 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382759 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382770 2988 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382781 2988 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382793 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382804 2988 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382814 2988 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382826 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 03 00:02:10 crc 
kubenswrapper[2988]: W1203 00:02:10.382837 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382849 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382860 2988 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382871 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382882 2988 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382893 2988 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382904 2988 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382915 2988 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382926 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382937 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382949 2988 feature_gate.go:227] unrecognized feature gate: Example Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382960 2988 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382972 2988 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382982 2988 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.382993 2988 feature_gate.go:227] unrecognized feature gate: 
GatewayAPI Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383004 2988 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383015 2988 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383026 2988 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383039 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383050 2988 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383062 2988 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383074 2988 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383084 2988 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383095 2988 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383106 2988 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383117 2988 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383129 2988 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383140 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383174 2988 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 03 00:02:10 crc 
kubenswrapper[2988]: W1203 00:02:10.383186 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383196 2988 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383207 2988 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383218 2988 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383229 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383240 2988 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383251 2988 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383262 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383274 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383284 2988 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383295 2988 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383308 2988 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383319 2988 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383331 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383343 2988 feature_gate.go:227] 
unrecognized feature gate: AlibabaPlatform
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383354 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383365 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383376 2988 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.383389 2988 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383554 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383567 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383578 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383590 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383601 2988 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383611 2988 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383623 2988 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383635 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383646 2988 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383657 2988 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383668 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383679 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383690 2988 feature_gate.go:227] unrecognized feature gate: Example
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383701 2988 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383712 2988 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383723 2988 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383735 2988 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383747 2988 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383758 2988 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383769 2988 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383780 2988 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383791 2988 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383802 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383813 2988 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383827 2988 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383838 2988 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383849 2988 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383860 2988 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383870 2988 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383881 2988 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383892 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383903 2988 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383914 2988 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383924 2988 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383936 2988 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383947 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383958 2988 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383968 2988 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383981 2988 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.383992 2988 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384003 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384013 2988 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384025 2988 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384039 2988 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384053 2988 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384081 2988 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384099 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384115 2988 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384132 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384146 2988 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384195 2988 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384211 2988 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384226 2988 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384240 2988 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384255 2988 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384269 2988 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384284 2988 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384298 2988 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384312 2988 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.384325 2988 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.384337 2988 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.384676 2988 server.go:925] "Client rotation is on, will bootstrap in background"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.392922 2988 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.395094 2988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.395663 2988 server.go:982] "Starting client certificate rotation"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.395692 2988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.395903 2988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-05-13 12:41:49.140440841 +0000 UTC
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.395997 2988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 3876h39m38.744447366s for next certificate rotation
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.411392 2988 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.417849 2988 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.443213 2988 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.460944 2988 remote_runtime.go:143] "Validated CRI v1 runtime API"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.461011 2988 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.588762 2988 remote_image.go:111] "Validated CRI v1 image API"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.592313 2988 fs.go:132] Filesystem UUIDs: map[2025-12-03-00-01-23-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.592369 2988 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.606428 2988 manager.go:217] Machine: {Timestamp:2025-12-03 00:02:10.604286177 +0000 UTC m=+1.060177474 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:c007f178-aa0e-43e3-b9af-eea9bad4fb2f BootID:f3184b53-def3-4045-8c4e-6960b677da38 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:90:12:e6 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:90:12:e6 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:d2:e3:7a:f4:67:16 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:64:f2:9b:f6:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.606607 2988 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.606723 2988 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.614949 2988 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615186 2988 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615224 2988 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615234 2988 container_manager_linux.go:304] "Creating device plugin manager"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615432 2988 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615456 2988 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.615926 2988 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.616006 2988 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.630272 2988 kubelet.go:406] "Attempting to sync node with API server"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.630371 2988 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.630637 2988 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.630701 2988 kubelet.go:322] "Adding apiserver pod source"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.630988 2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.646609 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.646690 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.646603 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.646750 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.738518 2988 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.739291 2988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757272 2988 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757570 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757598 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757609 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757625 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757647 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757664 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757672 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757680 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757691 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757699 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757712 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757721 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757730 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757742 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757750 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.757773 2988 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.758238 2988 server.go:1262] "Started kubelet"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.758445 2988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.760589 2988 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 00:02:10 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.772013 2988 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.772075 2988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.772117 2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.777819 2988 volume_manager.go:289] "The desired_state_of_world populator starts"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.777845 2988 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.777791 2988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-05-08 08:30:20.120189699 +0000 UTC
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.779700 2988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 3752h28m9.340493525s for next certificate rotation
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.771578 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.782284 2988 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 03 00:02:10 crc kubenswrapper[2988]: W1203 00:02:10.782521 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.782575 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="200ms"
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.782596 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.782952 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.783607 2988 server.go:461] "Adding debug handlers to kubelet server"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.784991 2988 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.785052 2988 factory.go:55] Registering systemd factory
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.785095 2988 factory.go:221] Registration of the systemd container factory successfully
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.785852 2988 factory.go:153] Registering CRI-O factory
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.785939 2988 factory.go:221] Registration of the crio container factory successfully
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.786016 2988 factory.go:103] Registering Raw factory
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.786075 2988 manager.go:1196] Started watching for new ooms in manager
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.786629 2988 manager.go:319] Starting recovery of all containers
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797744 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797775 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797787 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797798 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797811 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797821 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797832 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797842 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797861 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797872 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797883 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797909 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797921 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797932 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797943 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797953 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797974 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797985 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.797995 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798007 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798018 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798029 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798040 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798051 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798065 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798076 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798089 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798099 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798109 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798120 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext=""
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798130 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e"
volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798140 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798167 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798178 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798203 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798214 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798223 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" 
volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798233 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798242 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798252 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798261 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798271 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798281 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798914 2988 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798934 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798946 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798956 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798967 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.798993 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799008 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799020 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799031 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799042 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799055 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799068 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799079 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799091 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799102 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799116 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799126 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799138 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799161 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799172 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799184 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799194 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799205 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799229 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799241 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799252 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799263 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799274 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799284 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799295 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" 
volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799307 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799319 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799329 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799340 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799353 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799364 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" 
volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799374 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799386 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799397 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799408 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799418 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799429 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" 
volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799440 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799454 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799465 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799475 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799486 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799521 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" 
volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799534 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799544 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799555 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799566 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799579 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799589 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" 
volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799605 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799616 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799627 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799639 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799650 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799662 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" 
seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799675 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799686 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799697 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799709 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799720 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799729 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799741 
2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799751 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799761 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799771 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799781 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799791 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799802 2988 reconstruct_new.go:135] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799812 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799822 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799833 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799844 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799853 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799868 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799878 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799888 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799898 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799909 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799920 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799930 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799940 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799964 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799975 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799985 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.799994 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800004 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800014 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800027 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800037 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800047 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800056 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800066 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" 
volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800077 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800086 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800097 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800106 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800116 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800127 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" 
volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800160 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800171 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800181 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800190 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800201 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800212 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" 
volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800222 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800232 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800242 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800252 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800262 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800272 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" 
volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800283 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800293 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800303 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800313 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800323 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800333 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" 
volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800343 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800354 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800365 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800377 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800389 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800401 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" 
volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800411 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800422 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800433 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800444 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800455 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800465 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" 
volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800474 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800484 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800496 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800507 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800531 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800541 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" 
volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800551 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800562 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800576 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800591 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800605 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800617 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 
volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800627 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800638 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800648 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800659 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800668 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800684 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" 
volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800694 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800705 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800716 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800728 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800739 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800750 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" 
volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800760 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800770 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800781 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800791 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800802 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800812 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" 
volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800822 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800832 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800846 2988 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext="" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800855 2988 reconstruct_new.go:102] "Volume reconstruction finished" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.800862 2988 reconciler_new.go:29] "Reconciler: start to sync state" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.806194 2988 manager.go:324] Recovery completed Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.878398 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.879913 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.880018 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 
00:02:10.880087 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.880192 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.881991 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.882207 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.882848 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.882879 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.882891 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.884220 2988 cpu_manager.go:215] "Starting CPU manager" policy="none"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.884291 2988 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Dec 03 00:02:10 crc kubenswrapper[2988]: I1203 00:02:10.884364 2988 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 00:02:10 crc kubenswrapper[2988]: E1203 00:02:10.985671 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="400ms"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.082923 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.085067 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.085127 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.085140 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.085193 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.091793 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.178337 2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.181357 2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.181402 2988 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.181433 2988 kubelet.go:2343] "Starting kubelet main sync loop"
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.181511 2988 kubelet.go:2367] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 03 00:02:11 crc kubenswrapper[2988]: W1203 00:02:11.189142 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.189211 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.255817 2988 policy_none.go:49] "None policy: Start"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.257368 2988 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.257421 2988 state_mem.go:35] "Initializing new in-memory state store"
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.282090 2988 kubelet.go:2367] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.387672 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="800ms"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.420572 2988 manager.go:296] "Starting Device Plugin manager"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.420679 2988 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.420699 2988 server.go:79] "Starting device plugin registration server"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.421465 2988 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.421599 2988 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.421620 2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.483409 2988 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.483530 2988 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.483646 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.485504 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.485549 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.485571 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.485693 2988 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.485744 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.486343 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.486398 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487232 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487287 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487301 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487420 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487475 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487483 2988 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487499 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487538 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487760 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.487830 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490656 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490667 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490681 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490692 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490693 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490713 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490813 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490835 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490857 2988 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.490935 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491536 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491625 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491646 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491856 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491909 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491862 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.491932 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492128 2988 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492187 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492252 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492334 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492882 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492922 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.492941 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493007 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493043 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493063 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493096 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493146 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.493258 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494031 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494096 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494119 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494131 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494195 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.494214 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.494711 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.633126 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.633283 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.633400 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.634008 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.634271 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.634368 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.634448 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635089 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635202 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635240 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635298 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635371 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635402 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635438 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.635467 2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738229 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738340 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738409 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738423 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738467 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738532 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738584 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738629 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738915 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738945 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738969 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.738974 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739008 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739045 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739061 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739091 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739076 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739139 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739260 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739328 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739399 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739422 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739467 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739576 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739712 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739825 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739872 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.739942 2988 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.740018 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.740041 2988 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.781411 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:11 crc kubenswrapper[2988]: W1203 00:02:11.791058 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:11 crc kubenswrapper[2988]: E1203 00:02:11.791211 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.828347 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.855086 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.865368 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.894941 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:11 crc kubenswrapper[2988]: I1203 00:02:11.903734 2988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.092139 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.092253 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.115725 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.115848 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.190032 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="1.6s"
Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.216806 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.216872 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.295497 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.297147 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.297234 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.297254 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.297289 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.298725 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial
tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.342764 2988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a57a7fb1944b43a6bd11a349520d301.slice/crio-637aa1141c4e9dbcc9ab94029f1bfa5b2cf1e2748055ad82e47f0eb751df158f WatchSource:0}: Error finding container 637aa1141c4e9dbcc9ab94029f1bfa5b2cf1e2748055ad82e47f0eb751df158f: Status 404 returned error can't find the container with id 637aa1141c4e9dbcc9ab94029f1bfa5b2cf1e2748055ad82e47f0eb751df158f Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.346614 2988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6a3a59e513625ca0ae3724df2686bc.slice/crio-de9c4b16204ead259adb75371325dc4a16f8167198cb81b685903b2e2ecbfb15 WatchSource:0}: Error finding container de9c4b16204ead259adb75371325dc4a16f8167198cb81b685903b2e2ecbfb15: Status 404 returned error can't find the container with id de9c4b16204ead259adb75371325dc4a16f8167198cb81b685903b2e2ecbfb15 Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.349255 2988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-eb013460084969062e8c96e950d76786cf836a097e671ed6121692f5ec78aac1 WatchSource:0}: Error finding container eb013460084969062e8c96e950d76786cf836a097e671ed6121692f5ec78aac1: Status 404 returned error can't find the container with id eb013460084969062e8c96e950d76786cf836a097e671ed6121692f5ec78aac1 Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.350984 2988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-aa8841be3bb4423d66f3f82aec6f5f8599b4037b5193e3c1b87928782e90f15a 
WatchSource:0}: Error finding container aa8841be3bb4423d66f3f82aec6f5f8599b4037b5193e3c1b87928782e90f15a: Status 404 returned error can't find the container with id aa8841be3bb4423d66f3f82aec6f5f8599b4037b5193e3c1b87928782e90f15a Dec 03 00:02:12 crc kubenswrapper[2988]: W1203 00:02:12.351753 2988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae85115fdc231b4002b57317b41a6400.slice/crio-187f6710b29c7ff47676a03d4eb702431a7de523193306420d1f12631797586d WatchSource:0}: Error finding container 187f6710b29c7ff47676a03d4eb702431a7de523193306420d1f12631797586d: Status 404 returned error can't find the container with id 187f6710b29c7ff47676a03d4eb702431a7de523193306420d1f12631797586d Dec 03 00:02:12 crc kubenswrapper[2988]: I1203 00:02:12.781661 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:12 crc kubenswrapper[2988]: E1203 00:02:12.876728 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.188980 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"187f6710b29c7ff47676a03d4eb702431a7de523193306420d1f12631797586d"} Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.190112 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"637aa1141c4e9dbcc9ab94029f1bfa5b2cf1e2748055ad82e47f0eb751df158f"} Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.191499 2988 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"aa8841be3bb4423d66f3f82aec6f5f8599b4037b5193e3c1b87928782e90f15a"} Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.192623 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"de9c4b16204ead259adb75371325dc4a16f8167198cb81b685903b2e2ecbfb15"} Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.193914 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"eb013460084969062e8c96e950d76786cf836a097e671ed6121692f5ec78aac1"} Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.781933 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:13 crc kubenswrapper[2988]: E1203 00:02:13.791955 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="3.2s" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.899179 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.900684 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.900727 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.900743 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:13 crc kubenswrapper[2988]: I1203 00:02:13.900770 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:13 crc kubenswrapper[2988]: E1203 00:02:13.902380 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:13 crc kubenswrapper[2988]: W1203 00:02:13.927122 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:13 crc kubenswrapper[2988]: E1203 00:02:13.927254 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: W1203 00:02:14.385895 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: E1203 00:02:14.385992 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: W1203 00:02:14.445043 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: E1203 00:02:14.445132 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: W1203 00:02:14.476138 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: E1203 00:02:14.476290 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:14 crc kubenswrapper[2988]: I1203 00:02:14.782144 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:15 crc kubenswrapper[2988]: I1203 00:02:15.781758 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:16 crc kubenswrapper[2988]: I1203 00:02:16.781390 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:16 crc kubenswrapper[2988]: E1203 00:02:16.980610 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:02:16 crc kubenswrapper[2988]: E1203 00:02:16.994691 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="6.4s" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.103239 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.104894 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.104963 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.104984 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.105026 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:17 crc kubenswrapper[2988]: E1203 00:02:17.106387 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.214355 2988 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="a6f129d1e4e5351a1b916059a9ac5fdfac43baf7d5706d3e62923a11504388b0" exitCode=0 Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.214454 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.214496 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"a6f129d1e4e5351a1b916059a9ac5fdfac43baf7d5706d3e62923a11504388b0"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.215961 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.215996 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.216009 2988 kubelet_node_status.go:729] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.216904 2988 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="b01f8517336b721c299e851a9919dd8120322d2aa6f743ec74cd3993077a5024" exitCode=0 Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.217016 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"b01f8517336b721c299e851a9919dd8120322d2aa6f743ec74cd3993077a5024"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.217025 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.217730 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.217766 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.217780 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.220551 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"665a34608d3a5605371e24e6290c46799dfd5f53a763906b4118fe26ca1d0bc8"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.220601 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 
00:02:17.220623 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"3a7b7fb854ec1ecd679824d61213abb95dd4a1a4c624ae4a3998cda5f2de0438"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.222835 2988 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="1633b8acfea47597b19d2733735a3ac40088c34f70eb0c58c03c74899e43bf6f" exitCode=0 Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.222908 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"1633b8acfea47597b19d2733735a3ac40088c34f70eb0c58c03c74899e43bf6f"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.222965 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.224430 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.224475 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.224495 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.225863 2988 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45" exitCode=0 Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.225921 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45"} Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.226060 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.229750 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.229869 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.229885 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.235223 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.237119 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.237176 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.237190 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:17 crc kubenswrapper[2988]: I1203 00:02:17.780921 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:17 crc kubenswrapper[2988]: W1203 00:02:17.836504 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:17 crc kubenswrapper[2988]: E1203 00:02:17.836599 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.230421 2988 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="fb568711fd46695d146d362d84fbaa3b90fcc7c06e8e1104eb655fab5c4d8bfd" exitCode=0 Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.230498 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.230553 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"fb568711fd46695d146d362d84fbaa3b90fcc7c06e8e1104eb655fab5c4d8bfd"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.231250 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.231279 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.231289 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.271610 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"2350ddc24a25cddc673aa0cf4f4fa698ac36dd61611d97ffa3996c6223f13982"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.271720 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.272807 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.272838 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.272848 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.274553 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"5f45a8e78c37d2ca51bdcb19b555c20cadd4dc0e37c6d4196c204dac6844e75c"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.274552 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.275371 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.275403 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.275417 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.278735 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.278759 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.305956 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"4fe0c39c7534f058d4974bf4bde5662895ecc59c6553073f52f48f7fb9fba25d"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.305994 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"2c5bcf4a19959add7c59f51c10e505f15809226df15d29ebb329c18dd470264d"} Dec 03 00:02:18 crc kubenswrapper[2988]: I1203 00:02:18.782005 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:19 crc kubenswrapper[2988]: W1203 00:02:19.089390 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:19 crc kubenswrapper[2988]: E1203 00:02:19.089456 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.311265 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3"} Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.311310 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8"} Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.311323 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20"} Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.311367 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.312449 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.312487 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.312505 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.314125 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"a6b460931d66c25bbec5d84b4593699e0cfaffdfc08fe32ce7c90ec0bf6545ca"}
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.314220 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.314927 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.314948 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.314959 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.316883 2988 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="89512a852dfaccad952d6c03075e8fac83d79c573f8d15ba731bdc70dc96f2e3" exitCode=0
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.316940 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.316947 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"89512a852dfaccad952d6c03075e8fac83d79c573f8d15ba731bdc70dc96f2e3"}
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.317127 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.317267 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.317936 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.317960 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.317971 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.318540 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.318562 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.318573 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.320233 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.320266 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.320277 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.764143 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.781512 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:19 crc kubenswrapper[2988]: I1203 00:02:19.961859 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.017131 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.334447 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.334489 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.334665 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.335277 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"211384be2e194ca67e43886d68bc0c0a36e33d03f467bdde2d451e2189474938"}
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.335383 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"704dc1e647965e8d93ee64d7d87f1117b086c8c8017e92b0c0695d8db5b0a0bb"}
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.335420 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.335461 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.335484 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336392 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336393 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336443 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336446 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336462 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336473 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336693 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336849 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.336925 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:20 crc kubenswrapper[2988]: W1203 00:02:20.375366 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:20 crc kubenswrapper[2988]: E1203 00:02:20.375462 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:20 crc kubenswrapper[2988]: W1203 00:02:20.551114 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:20 crc kubenswrapper[2988]: E1203 00:02:20.551312 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.583313 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:20 crc kubenswrapper[2988]: I1203 00:02:20.781265 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.341316 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.341316 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.341347 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.341339 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"37a603c1c009187a49547991233efcba1a29be9d746be9e05a86ed5292b3f53d"}
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.342001 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"8206b88157dc3ee0552e5fa2f98afff2c9ec64a6dd563900c8b8b921cc7ca9ce"}
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.341483 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.342957 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.342970 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343007 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343022 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343020 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.342986 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343052 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343068 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343072 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343510 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343533 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.343546 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:21 crc kubenswrapper[2988]: I1203 00:02:21.781847 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.343849 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.343851 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350247 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350303 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350342 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350351 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350368 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.350375 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:22 crc kubenswrapper[2988]: I1203 00:02:22.781687 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:22 crc kubenswrapper[2988]: E1203 00:02:22.877205 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:02:23 crc kubenswrapper[2988]: E1203 00:02:23.396943 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.507336 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.508954 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.509037 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.509056 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.509094 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:23 crc kubenswrapper[2988]: E1203 00:02:23.510897 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:02:23 crc kubenswrapper[2988]: I1203 00:02:23.781530 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:24 crc kubenswrapper[2988]: I1203 00:02:24.781373 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.055133 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.055420 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.056949 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.057026 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.057056 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.120423 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.216414 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.216638 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.218405 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.218538 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.218560 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.352932 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.354407 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.354464 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.354495 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:25 crc kubenswrapper[2988]: I1203 00:02:25.781280 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:26 crc kubenswrapper[2988]: I1203 00:02:26.782024 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:26 crc kubenswrapper[2988]: E1203 00:02:26.984052 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.169572 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.169720 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.171221 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.171293 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.171312 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:27 crc kubenswrapper[2988]: W1203 00:02:27.250289 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:27 crc kubenswrapper[2988]: E1203 00:02:27.250432 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.712428 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.712691 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.719091 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.719145 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.719205 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:27 crc kubenswrapper[2988]: I1203 00:02:27.781528 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:28 crc kubenswrapper[2988]: I1203 00:02:28.121523 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:02:28 crc kubenswrapper[2988]: I1203 00:02:28.121747 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:02:28 crc kubenswrapper[2988]: W1203 00:02:28.325284 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:28 crc kubenswrapper[2988]: E1203 00:02:28.325427 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:28 crc kubenswrapper[2988]: I1203 00:02:28.780998 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:29 crc kubenswrapper[2988]: W1203 00:02:29.111601 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:29 crc kubenswrapper[2988]: E1203 00:02:29.111711 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:29 crc kubenswrapper[2988]: I1203 00:02:29.764603 2988 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:02:29 crc kubenswrapper[2988]: I1203 00:02:29.764755 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:02:29 crc kubenswrapper[2988]: I1203 00:02:29.782113 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.370449 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-check-endpoints/1.log"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.373780 2988 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3" exitCode=255
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.373842 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3"}
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.374021 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.375450 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.375527 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.375553 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.378737 2988 scope.go:117] "RemoveContainer" containerID="8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3"
Dec 03 00:02:30 crc kubenswrapper[2988]: E1203 00:02:30.437615 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.511517 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.513312 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.513377 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.513397 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.513438 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:02:30 crc kubenswrapper[2988]: E1203 00:02:30.518277 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:02:30 crc kubenswrapper[2988]: I1203 00:02:30.781621 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.377731 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-check-endpoints/1.log"
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.379499 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011"}
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.379633 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.380253 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.380280 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.380291 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:31 crc kubenswrapper[2988]: I1203 00:02:31.781378 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:32 crc kubenswrapper[2988]: I1203 00:02:32.109280 2988 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
Dec 03 00:02:32 crc kubenswrapper[2988]: I1203 00:02:32.109383 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 03 00:02:32 crc kubenswrapper[2988]: I1203 00:02:32.782023 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:32 crc kubenswrapper[2988]: E1203 00:02:32.878621 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:02:32 crc kubenswrapper[2988]: I1203 00:02:32.997747 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:32 crc kubenswrapper[2988]: I1203 00:02:32.998083 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:33 crc kubenswrapper[2988]: I1203 00:02:33.001852 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:33 crc kubenswrapper[2988]: I1203 00:02:33.001914 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:33 crc kubenswrapper[2988]: I1203 00:02:33.001934 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:33 crc kubenswrapper[2988]: W1203 00:02:33.204303 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:33 crc kubenswrapper[2988]: E1203 00:02:33.204404 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:33 crc kubenswrapper[2988]: I1203 00:02:33.782088 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.772492 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.772777 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.775321 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.775423 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.775444 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.779518 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:02:34 crc kubenswrapper[2988]: I1203 00:02:34.781669 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:35 crc kubenswrapper[2988]: I1203 00:02:35.390928 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:35 crc kubenswrapper[2988]: I1203 00:02:35.391954 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:02:35 crc kubenswrapper[2988]: I1203 00:02:35.391998 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:02:35 crc kubenswrapper[2988]: I1203 00:02:35.392018 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:02:35 crc kubenswrapper[2988]: I1203 00:02:35.781917 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:36 crc kubenswrapper[2988]: I1203 00:02:36.781708 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:02:36 crc kubenswrapper[2988]: E1203 00:02:36.986971 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:02:37 crc kubenswrapper[2988]: E1203 00:02:37.440517 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.519353 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.520584 2988 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.520649 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.520664 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.520692 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:37 crc kubenswrapper[2988]: E1203 00:02:37.522284 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.771554 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.771771 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.772905 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.772959 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.772975 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:37 crc kubenswrapper[2988]: I1203 00:02:37.808718 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:37 crc 
kubenswrapper[2988]: I1203 00:02:37.815627 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.120024 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.120213 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.399652 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.400856 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.400966 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.400980 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:38 crc kubenswrapper[2988]: I1203 00:02:38.782371 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
Dec 03 00:02:39 crc kubenswrapper[2988]: I1203 00:02:39.781888 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:40 crc kubenswrapper[2988]: I1203 00:02:40.781257 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:41 crc kubenswrapper[2988]: I1203 00:02:41.781841 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:42 crc kubenswrapper[2988]: I1203 00:02:42.782427 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:42 crc kubenswrapper[2988]: E1203 00:02:42.879765 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.004918 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.005209 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.006726 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.006764 2988 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.006773 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:43 crc kubenswrapper[2988]: W1203 00:02:43.194974 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:43 crc kubenswrapper[2988]: E1203 00:02:43.195048 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:43 crc kubenswrapper[2988]: I1203 00:02:43.781885 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:44 crc kubenswrapper[2988]: E1203 00:02:44.444937 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.523346 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.525195 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.525238 2988 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.525251 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.525284 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:44 crc kubenswrapper[2988]: E1203 00:02:44.526837 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:44 crc kubenswrapper[2988]: I1203 00:02:44.782670 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:45 crc kubenswrapper[2988]: W1203 00:02:45.527073 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:45 crc kubenswrapper[2988]: E1203 00:02:45.527209 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:45 crc kubenswrapper[2988]: I1203 00:02:45.782091 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host Dec 03 00:02:46 crc kubenswrapper[2988]: I1203 00:02:46.781855 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:46 crc kubenswrapper[2988]: E1203 00:02:46.990257 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.677791 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:41662->192.168.126.11:10357: read: connection reset by peer" start-of-body= Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.677875 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 
192.168.126.11:41662->192.168.126.11:10357: read: connection reset by peer" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.677930 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.678057 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.679420 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.679479 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.679498 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.682140 2988 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.682578 2988 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519" gracePeriod=30 Dec 03 00:02:47 crc kubenswrapper[2988]: I1203 00:02:47.782088 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.430931 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log" Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.431600 2988 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519" exitCode=255 Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.431655 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519"} Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.431688 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11"} Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.431778 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.433288 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.433352 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.433378 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Dec 03 00:02:48 crc kubenswrapper[2988]: I1203 00:02:48.781537 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:49 crc kubenswrapper[2988]: I1203 00:02:49.781531 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:50 crc kubenswrapper[2988]: I1203 00:02:50.782271 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:51 crc kubenswrapper[2988]: E1203 00:02:51.447744 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:02:51 crc kubenswrapper[2988]: W1203 00:02:51.527868 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:51 crc kubenswrapper[2988]: E1203 00:02:51.528027 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:51 crc 
kubenswrapper[2988]: I1203 00:02:51.527895 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:51 crc kubenswrapper[2988]: I1203 00:02:51.529710 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:51 crc kubenswrapper[2988]: I1203 00:02:51.529764 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:51 crc kubenswrapper[2988]: I1203 00:02:51.529790 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:51 crc kubenswrapper[2988]: I1203 00:02:51.529830 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:51 crc kubenswrapper[2988]: E1203 00:02:51.531694 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:51 crc kubenswrapper[2988]: I1203 00:02:51.782585 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:52 crc kubenswrapper[2988]: I1203 00:02:52.781492 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:52 crc kubenswrapper[2988]: E1203 00:02:52.879951 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:02:53 crc kubenswrapper[2988]: I1203 00:02:53.781400 2988 csi_plugin.go:880] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:54 crc kubenswrapper[2988]: I1203 00:02:54.781534 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.055438 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.055704 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.058140 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.058218 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.058298 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.119257 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.452452 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.453885 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.453957 2988 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.453978 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:55 crc kubenswrapper[2988]: I1203 00:02:55.782511 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:56 crc kubenswrapper[2988]: I1203 00:02:56.782761 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:56 crc kubenswrapper[2988]: E1203 00:02:56.993095 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:02:57 crc kubenswrapper[2988]: I1203 00:02:57.782080 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:58 crc kubenswrapper[2988]: W1203 00:02:58.061266 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:58 crc kubenswrapper[2988]: E1203 00:02:58.061380 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.119589 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.119727 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:02:58 crc kubenswrapper[2988]: E1203 00:02:58.450549 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:02:58 
crc kubenswrapper[2988]: I1203 00:02:58.532353 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.533525 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.533555 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.533564 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.533587 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:02:58 crc kubenswrapper[2988]: E1203 00:02:58.534967 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:02:58 crc kubenswrapper[2988]: I1203 00:02:58.782029 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:02:59 crc kubenswrapper[2988]: I1203 00:02:59.782448 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:00 crc kubenswrapper[2988]: I1203 00:03:00.782202 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:01 
crc kubenswrapper[2988]: I1203 00:03:01.781763 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:02 crc kubenswrapper[2988]: I1203 00:03:02.781319 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:02 crc kubenswrapper[2988]: E1203 00:03:02.881006 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:03:03 crc kubenswrapper[2988]: I1203 00:03:03.782783 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:04 crc kubenswrapper[2988]: I1203 00:03:04.781672 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:05 crc kubenswrapper[2988]: E1203 00:03:05.452574 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.536246 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.538036 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.538122 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.538190 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.538236 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:05 crc kubenswrapper[2988]: E1203 00:03:05.539708 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:05 crc kubenswrapper[2988]: I1203 00:03:05.781619 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:06 crc kubenswrapper[2988]: I1203 00:03:06.782116 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:06 crc kubenswrapper[2988]: E1203 00:03:06.995213 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.265199 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.265326 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.266213 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.266245 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.266254 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:07 crc kubenswrapper[2988]: I1203 00:03:07.782234 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:08 crc kubenswrapper[2988]: I1203 00:03:08.120582 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:03:08 crc kubenswrapper[2988]: I1203 00:03:08.120713 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:03:08 crc kubenswrapper[2988]: I1203 00:03:08.781055 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:08 crc kubenswrapper[2988]: W1203 00:03:08.854003 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:08 crc kubenswrapper[2988]: E1203 00:03:08.854105 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:09 crc kubenswrapper[2988]: I1203 00:03:09.782113 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.772799 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.772974 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.773012 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.773049 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.773071 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:03:10 crc kubenswrapper[2988]: I1203 00:03:10.781909 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:11 crc kubenswrapper[2988]: I1203 00:03:11.782091 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:12 crc kubenswrapper[2988]: E1203 00:03:12.454588 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.540856 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.542349 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.542411 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.542427 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.542456 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:12 crc kubenswrapper[2988]: E1203 00:03:12.543840 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:12 crc kubenswrapper[2988]: I1203 00:03:12.781329 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:12 crc kubenswrapper[2988]: E1203 00:03:12.881800 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:03:13 crc kubenswrapper[2988]: I1203 00:03:13.781470 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:14 crc kubenswrapper[2988]: I1203 00:03:14.782077 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:15 crc kubenswrapper[2988]: I1203 00:03:15.782132 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:16 crc kubenswrapper[2988]: I1203 00:03:16.781143 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:16 crc kubenswrapper[2988]: E1203 00:03:16.997761 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:03:17 crc kubenswrapper[2988]: I1203 00:03:17.780985 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.120990 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.121191 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.121396 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.121602 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.123131 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.123237 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.123281 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.126689 2988 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.127382 2988 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11" gracePeriod=30
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.516443 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.517874 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.518829 2988 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11" exitCode=255
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.518872 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11"}
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.518935 2988 scope.go:117] "RemoveContainer" containerID="b19465ece4cec9ad010d717fc18f1c8d9207db686532fe862af9fd8ac1be6519"
Dec 03 00:03:18 crc kubenswrapper[2988]: I1203 00:03:18.781101 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:19 crc kubenswrapper[2988]: E1203 00:03:19.456390 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.524109 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.525646 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854"}
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.525920 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.527138 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.527217 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.527238 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.544851 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.546441 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.546549 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.546587 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.546611 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:19 crc kubenswrapper[2988]: E1203 00:03:19.548084 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:19 crc kubenswrapper[2988]: I1203 00:03:19.781678 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:20 crc kubenswrapper[2988]: I1203 00:03:20.528668 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:20 crc kubenswrapper[2988]: I1203 00:03:20.530465 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:20 crc kubenswrapper[2988]: I1203 00:03:20.530522 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:20 crc kubenswrapper[2988]: I1203 00:03:20.530542 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:20 crc kubenswrapper[2988]: I1203 00:03:20.782354 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:21 crc kubenswrapper[2988]: I1203 00:03:21.781925 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:22 crc kubenswrapper[2988]: I1203 00:03:22.782136 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:22 crc kubenswrapper[2988]: E1203 00:03:22.882200 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:03:23 crc kubenswrapper[2988]: I1203 00:03:23.182448 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:23 crc kubenswrapper[2988]: I1203 00:03:23.184725 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:23 crc kubenswrapper[2988]: I1203 00:03:23.185015 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:23 crc kubenswrapper[2988]: I1203 00:03:23.185031 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:23 crc kubenswrapper[2988]: I1203 00:03:23.781914 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:24 crc kubenswrapper[2988]: I1203 00:03:24.781768 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:25 crc kubenswrapper[2988]: W1203 00:03:25.029086 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:25 crc kubenswrapper[2988]: E1203 00:03:25.029300 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.055006 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.055252 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.056779 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.056830 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.056850 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.120411 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.543577 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.545452 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.545523 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.545550 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:25 crc kubenswrapper[2988]: I1203 00:03:25.782524 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:26 crc kubenswrapper[2988]: E1203 00:03:26.458520 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.548923 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.550824 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.550865 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.550885 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.550919 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:26 crc kubenswrapper[2988]: E1203 00:03:26.552389 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:26 crc kubenswrapper[2988]: I1203 00:03:26.781894 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:27 crc kubenswrapper[2988]: E1203 00:03:26.999950 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:03:27 crc kubenswrapper[2988]: I1203 00:03:27.782566 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:28 crc kubenswrapper[2988]: I1203 00:03:28.120793 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:03:28 crc kubenswrapper[2988]: I1203 00:03:28.120947 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:03:28 crc kubenswrapper[2988]: I1203 00:03:28.782500 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:29 crc kubenswrapper[2988]: I1203 00:03:29.781697 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:30 crc kubenswrapper[2988]: I1203 00:03:30.782181 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:31 crc kubenswrapper[2988]: I1203 00:03:31.781510 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:32 crc kubenswrapper[2988]: I1203 00:03:32.781877 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:32 crc kubenswrapper[2988]: W1203 00:03:32.820252 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:32 crc kubenswrapper[2988]: E1203 00:03:32.820340 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:32 crc kubenswrapper[2988]: E1203 00:03:32.883009 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:03:33 crc kubenswrapper[2988]: E1203 00:03:33.461615 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.553510 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.555365 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.555427 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.555450 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.555483 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:33 crc kubenswrapper[2988]: E1203 00:03:33.557009 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:33 crc kubenswrapper[2988]: I1203 00:03:33.781606 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:34 crc kubenswrapper[2988]: I1203 00:03:34.781473 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:35 crc kubenswrapper[2988]: I1203 00:03:35.782571 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:36 crc kubenswrapper[2988]: I1203 00:03:36.781127 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:37 crc kubenswrapper[2988]: E1203 00:03:37.002734 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:03:37 crc kubenswrapper[2988]: I1203 00:03:37.781394 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:38 crc kubenswrapper[2988]: I1203 00:03:38.120092 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:03:38 crc kubenswrapper[2988]: I1203 00:03:38.120252 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:03:38 crc kubenswrapper[2988]: I1203 00:03:38.781698 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:39 crc kubenswrapper[2988]: I1203 00:03:39.781118 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:40 crc kubenswrapper[2988]: E1203 00:03:40.464076 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.557819 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.559398 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.559519 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.559645 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.559762 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:03:40 crc kubenswrapper[2988]: E1203 00:03:40.561083 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:03:40 crc kubenswrapper[2988]: I1203 00:03:40.781421 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:03:40 crc kubenswrapper[2988]: W1203 00:03:40.985868 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:40 crc kubenswrapper[2988]: E1203 00:03:40.986254 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:41 crc kubenswrapper[2988]: I1203 00:03:41.782053 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:42 crc kubenswrapper[2988]: I1203 00:03:42.781372 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:42 crc kubenswrapper[2988]: E1203 00:03:42.883243 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:03:43 crc kubenswrapper[2988]: I1203 00:03:43.780774 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:44 crc kubenswrapper[2988]: I1203 00:03:44.782340 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:45 crc kubenswrapper[2988]: W1203 00:03:45.017461 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:45 crc kubenswrapper[2988]: E1203 00:03:45.017752 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:45 crc kubenswrapper[2988]: I1203 00:03:45.782003 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:46 crc kubenswrapper[2988]: I1203 00:03:46.182377 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:46 crc kubenswrapper[2988]: I1203 00:03:46.184801 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:46 crc kubenswrapper[2988]: I1203 00:03:46.184876 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:46 crc kubenswrapper[2988]: I1203 00:03:46.184900 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:46 crc kubenswrapper[2988]: I1203 00:03:46.781531 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:47 crc kubenswrapper[2988]: E1203 00:03:47.005040 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:03:47 crc kubenswrapper[2988]: E1203 00:03:47.466884 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.561615 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.563793 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.563897 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.563921 2988 kubelet_node_status.go:729] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.563957 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:03:47 crc kubenswrapper[2988]: E1203 00:03:47.565768 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:03:47 crc kubenswrapper[2988]: I1203 00:03:47.781368 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.120593 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.120807 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.120878 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.121076 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller 
attach/detach" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.122593 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.122765 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.122893 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.127774 2988 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.128722 2988 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854" gracePeriod=30 Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.737316 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.738034 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.739810 2988 generic.go:334] "Generic (PLEG): container finished" 
podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854" exitCode=255 Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.739920 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854"} Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.739982 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56"} Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.740015 2988 scope.go:117] "RemoveContainer" containerID="b8fde73fe3ef141c468f90b72c4627777b57b821eb23714182f823955e62ae11" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.740147 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.741362 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.741431 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:48 crc kubenswrapper[2988]: I1203 00:03:48.741478 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:49 crc kubenswrapper[2988]: I1203 00:03:49.130355 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:49 crc kubenswrapper[2988]: 
I1203 00:03:49.747443 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Dec 03 00:03:49 crc kubenswrapper[2988]: I1203 00:03:49.781866 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:50 crc kubenswrapper[2988]: I1203 00:03:50.781967 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:51 crc kubenswrapper[2988]: I1203 00:03:51.781892 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:52 crc kubenswrapper[2988]: I1203 00:03:52.182522 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:52 crc kubenswrapper[2988]: I1203 00:03:52.184227 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:52 crc kubenswrapper[2988]: I1203 00:03:52.184287 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:52 crc kubenswrapper[2988]: I1203 00:03:52.184307 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:52 crc kubenswrapper[2988]: I1203 00:03:52.782082 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:52 crc kubenswrapper[2988]: E1203 00:03:52.885042 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:03:53 crc kubenswrapper[2988]: I1203 00:03:53.781640 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:54 crc kubenswrapper[2988]: E1203 00:03:54.468381 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.566468 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.568080 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.568137 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.568193 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.568238 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:03:54 crc kubenswrapper[2988]: E1203 00:03:54.569970 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:03:54 crc kubenswrapper[2988]: I1203 00:03:54.782197 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.055051 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.055254 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.056504 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.056569 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.056588 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.119722 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.767883 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.768888 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.768926 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:03:55 
crc kubenswrapper[2988]: I1203 00:03:55.768937 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:03:55 crc kubenswrapper[2988]: I1203 00:03:55.782083 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:56 crc kubenswrapper[2988]: I1203 00:03:56.781435 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:57 crc kubenswrapper[2988]: E1203 00:03:57.007463 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:03:57 crc kubenswrapper[2988]: E1203 00:03:57.007594 2988 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.187d8b9be9d2fd1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,LastTimestamp:2025-12-03 00:02:10.75821289 +0000 UTC m=+1.214104187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:03:57 crc kubenswrapper[2988]: E1203 00:03:57.008974 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:03:57 crc kubenswrapper[2988]: I1203 00:03:57.782028 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:58 crc kubenswrapper[2988]: I1203 00:03:58.120048 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:03:58 crc kubenswrapper[2988]: I1203 00:03:58.120230 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:03:58 crc kubenswrapper[2988]: I1203 00:03:58.781702 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:03:59 crc kubenswrapper[2988]: I1203 00:03:59.781447 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:00 crc kubenswrapper[2988]: I1203 00:04:00.781357 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:01 crc kubenswrapper[2988]: E1203 00:04:01.471010 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.570303 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.571454 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.571513 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.571540 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.571583 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:04:01 crc kubenswrapper[2988]: E1203 00:04:01.573322 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:04:01 crc kubenswrapper[2988]: I1203 00:04:01.781435 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:02 crc kubenswrapper[2988]: I1203 00:04:02.782300 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:02 crc kubenswrapper[2988]: E1203 00:04:02.885647 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:04:03 crc kubenswrapper[2988]: I1203 00:04:03.781436 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:04 crc kubenswrapper[2988]: I1203 00:04:04.782275 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:05 crc kubenswrapper[2988]: E1203 00:04:05.319818 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:04:05 crc kubenswrapper[2988]: I1203 00:04:05.782443 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:06 crc kubenswrapper[2988]: I1203 00:04:06.781464 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:07 crc kubenswrapper[2988]: I1203 00:04:07.781293 2988 csi_plugin.go:880] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.120251 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.120377 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.181996 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.183807 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.183879 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.183893 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:08 crc kubenswrapper[2988]: E1203 00:04:08.472881 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.573985 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.575426 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.575460 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.575475 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.575524 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:08 crc kubenswrapper[2988]: E1203 00:04:08.577656 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:08 crc kubenswrapper[2988]: I1203 00:04:08.781936 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:09 crc kubenswrapper[2988]: I1203 00:04:09.781805 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.773745 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.773926 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.773969 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.774000 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.774038 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:04:10 crc kubenswrapper[2988]: I1203 00:04:10.781722 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:11 crc kubenswrapper[2988]: I1203 00:04:11.781899 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:12 crc kubenswrapper[2988]: I1203 00:04:12.781452 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:12 crc kubenswrapper[2988]: E1203 00:04:12.886206 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:04:13 crc kubenswrapper[2988]: I1203 00:04:13.782114 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:14 crc kubenswrapper[2988]: I1203 00:04:14.782222 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:15 crc kubenswrapper[2988]: E1203 00:04:15.322293 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:04:15 crc kubenswrapper[2988]: E1203 00:04:15.475440 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.578723 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.579834 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.579908 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.579923 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.579954 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:15 crc kubenswrapper[2988]: E1203 00:04:15.581412 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:15 crc kubenswrapper[2988]: I1203 00:04:15.782308 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:16 crc kubenswrapper[2988]: I1203 00:04:16.781769 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:17 crc kubenswrapper[2988]: I1203 00:04:17.781523 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.120175 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.120307 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.120361 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.120497 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.121692 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.121733 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.121748 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.123621 2988 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.123982 2988 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56" gracePeriod=30
Dec 03 00:04:18 crc kubenswrapper[2988]: W1203 00:04:18.320523 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: E1203 00:04:18.320643 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: W1203 00:04:18.689098 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: E1203 00:04:18.689535 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.781464 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.834068 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.834910 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.836946 2988 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56" exitCode=255
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.837019 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56"}
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.837068 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e"}
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.837104 2988 scope.go:117] "RemoveContainer" containerID="7ad8349668684b6af54220a431b23ff368c664fc9d05e6aae75d9a03da7cd854"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.837406 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.838888 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.838957 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:18 crc kubenswrapper[2988]: I1203 00:04:18.838991 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:19 crc kubenswrapper[2988]: I1203 00:04:19.782295 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:19 crc kubenswrapper[2988]: I1203 00:04:19.841559 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log"
Dec 03 00:04:20 crc kubenswrapper[2988]: I1203 00:04:20.781420 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:21 crc kubenswrapper[2988]: I1203 00:04:21.781507 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:22 crc kubenswrapper[2988]: E1203 00:04:22.476633 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.581744 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.583718 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.583778 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.583798 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.583841 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:22 crc kubenswrapper[2988]: E1203 00:04:22.585744 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:22 crc kubenswrapper[2988]: I1203 00:04:22.781897 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:22 crc kubenswrapper[2988]: E1203 00:04:22.887499 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:04:23 crc kubenswrapper[2988]: I1203 00:04:23.782022 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:24 crc kubenswrapper[2988]: I1203 00:04:24.781407 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.055064 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.056230 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.057853 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.058109 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.058367 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.119982 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:04:25 crc kubenswrapper[2988]: E1203 00:04:25.324604 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.790903 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.864817 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.866383 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.866458 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:25 crc kubenswrapper[2988]: I1203 00:04:25.866477 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:26 crc kubenswrapper[2988]: W1203 00:04:26.246620 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:26 crc kubenswrapper[2988]: E1203 00:04:26.246694 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:26 crc kubenswrapper[2988]: I1203 00:04:26.781099 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:27 crc kubenswrapper[2988]: I1203 00:04:27.781564 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:28 crc kubenswrapper[2988]: I1203 00:04:28.120764 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:04:28 crc kubenswrapper[2988]: I1203 00:04:28.120883 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:04:28 crc kubenswrapper[2988]: I1203 00:04:28.781760 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:29 crc kubenswrapper[2988]: E1203 00:04:29.479002 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.586229 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.587746 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.587803 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.587842 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.587876 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:29 crc kubenswrapper[2988]: E1203 00:04:29.589633 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:29 crc kubenswrapper[2988]: I1203 00:04:29.781830 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:30 crc kubenswrapper[2988]: I1203 00:04:30.781646 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:31 crc kubenswrapper[2988]: I1203 00:04:31.782122 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:32 crc kubenswrapper[2988]: W1203 00:04:32.039468 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:32 crc kubenswrapper[2988]: E1203 00:04:32.039649 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:32 crc kubenswrapper[2988]: I1203 00:04:32.781997 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:32 crc kubenswrapper[2988]: E1203 00:04:32.981758 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:04:33 crc kubenswrapper[2988]: I1203 00:04:33.782524 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:34 crc kubenswrapper[2988]: I1203 00:04:34.781886 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:35 crc kubenswrapper[2988]: E1203 00:04:35.327254 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:04:35 crc kubenswrapper[2988]: I1203 00:04:35.782206 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:36 crc kubenswrapper[2988]: E1203 00:04:36.480976 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.590283 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.592145 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.592240 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.592259 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.592298 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:36 crc kubenswrapper[2988]: E1203 00:04:36.593896 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:36 crc kubenswrapper[2988]: I1203 00:04:36.781845 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:37 crc kubenswrapper[2988]: I1203 00:04:37.781852 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:38 crc kubenswrapper[2988]: I1203 00:04:38.120654 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:04:38 crc kubenswrapper[2988]: I1203 00:04:38.120774 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:04:38 crc kubenswrapper[2988]: I1203 00:04:38.782287 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:39 crc kubenswrapper[2988]: I1203 00:04:39.781715 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:40 crc kubenswrapper[2988]: I1203 00:04:40.781875 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:41 crc kubenswrapper[2988]: I1203 00:04:41.781951 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:42 crc kubenswrapper[2988]: I1203 00:04:42.782306 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:42 crc kubenswrapper[2988]: E1203 00:04:42.982573 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 00:04:43 crc kubenswrapper[2988]: E1203 00:04:43.482990 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.594715 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.596661 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.596716 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.596737 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.596775 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:04:43 crc kubenswrapper[2988]: E1203 00:04:43.598409 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 03 00:04:43 crc kubenswrapper[2988]: I1203 00:04:43.781351 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:44 crc kubenswrapper[2988]: I1203 00:04:44.781838 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:45 crc kubenswrapper[2988]: E1203 00:04:45.329593 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:04:45 crc kubenswrapper[2988]: I1203 00:04:45.781609 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:46 crc kubenswrapper[2988]: I1203 00:04:46.782407 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:47 crc kubenswrapper[2988]: I1203 00:04:47.782074 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.121282 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.121441 2988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.121560 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.121824 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.123562 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.123624 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.123651 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.126723 2988 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.127430 2988 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
containerName="cluster-policy-controller" containerID="cri-o://e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" gracePeriod=30 Dec 03 00:04:48 crc kubenswrapper[2988]: E1203 00:04:48.213282 2988 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 03 00:04:48 crc kubenswrapper[2988]: I1203 00:04:48.781642 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.030948 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.031940 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.033479 2988 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" exitCode=255 Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.033532 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e"} Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.033635 2988 scope.go:117] "RemoveContainer" containerID="b37aaabda5d627c3a29b2e5cc8cfb03e6858a2b0475fc6ab6dcd5b136395ff56" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.033822 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.034971 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.035032 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.035058 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.037634 2988 scope.go:117] "RemoveContainer" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" Dec 03 00:04:49 crc kubenswrapper[2988]: E1203 00:04:49.039021 2988 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.723527 2988 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:04:49 crc kubenswrapper[2988]: I1203 00:04:49.781481 2988 csi_plugin.go:880] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.038981 2988 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.040863 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.041874 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.041949 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.041974 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.044737 2988 scope.go:117] "RemoveContainer" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" Dec 03 00:04:50 crc kubenswrapper[2988]: E1203 00:04:50.046039 2988 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 03 00:04:50 crc kubenswrapper[2988]: E1203 00:04:50.484759 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.599035 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.600653 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.600713 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.600741 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.600860 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:04:50 crc kubenswrapper[2988]: E1203 00:04:50.603142 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:04:50 crc kubenswrapper[2988]: I1203 00:04:50.781661 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:51 crc kubenswrapper[2988]: I1203 00:04:51.781664 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:52 crc kubenswrapper[2988]: I1203 00:04:52.181948 2988 kubelet_node_status.go:402] 
"Setting node annotation to enable volume controller attach/detach" Dec 03 00:04:52 crc kubenswrapper[2988]: I1203 00:04:52.183750 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:52 crc kubenswrapper[2988]: I1203 00:04:52.183806 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:52 crc kubenswrapper[2988]: I1203 00:04:52.183828 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:52 crc kubenswrapper[2988]: I1203 00:04:52.781868 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:52 crc kubenswrapper[2988]: E1203 00:04:52.983255 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:04:53 crc kubenswrapper[2988]: I1203 00:04:53.781732 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:54 crc kubenswrapper[2988]: I1203 00:04:54.781328 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:55 crc kubenswrapper[2988]: E1203 00:04:55.331770 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" 
event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:04:55 crc kubenswrapper[2988]: I1203 00:04:55.781179 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:56 crc kubenswrapper[2988]: I1203 00:04:56.781274 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:57 crc kubenswrapper[2988]: W1203 00:04:57.173844 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:57 crc kubenswrapper[2988]: E1203 00:04:57.173930 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:57 crc kubenswrapper[2988]: E1203 00:04:57.487447 2988 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.604114 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.605546 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.605608 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.605637 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.605671 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:04:57 crc kubenswrapper[2988]: E1203 00:04:57.607254 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:04:57 crc kubenswrapper[2988]: I1203 00:04:57.781347 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:58 crc kubenswrapper[2988]: I1203 00:04:58.781808 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:04:59 crc 
kubenswrapper[2988]: I1203 00:04:59.781800 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:00 crc kubenswrapper[2988]: I1203 00:05:00.782392 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:01 crc kubenswrapper[2988]: I1203 00:05:01.781327 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:02 crc kubenswrapper[2988]: I1203 00:05:02.782061 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:02 crc kubenswrapper[2988]: E1203 00:05:02.984289 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:05:03 crc kubenswrapper[2988]: I1203 00:05:03.781555 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.182318 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.183954 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.184012 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.184034 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.186834 2988 scope.go:117] "RemoveContainer" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" Dec 03 00:05:04 crc kubenswrapper[2988]: E1203 00:05:04.188179 2988 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 03 00:05:04 crc kubenswrapper[2988]: E1203 00:05:04.490232 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.607682 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.608780 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.608806 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.608817 2988 kubelet_node_status.go:729] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.608839 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:04 crc kubenswrapper[2988]: E1203 00:05:04.609375 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:04 crc kubenswrapper[2988]: I1203 00:05:04.781496 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:05 crc kubenswrapper[2988]: I1203 00:05:05.182546 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:05 crc kubenswrapper[2988]: I1203 00:05:05.184256 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:05 crc kubenswrapper[2988]: I1203 00:05:05.184329 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:05 crc kubenswrapper[2988]: I1203 00:05:05.184358 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:05 crc kubenswrapper[2988]: E1203 00:05:05.333916 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:05:05 crc kubenswrapper[2988]: I1203 00:05:05.781869 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:06 crc kubenswrapper[2988]: W1203 00:05:06.721632 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:06 crc kubenswrapper[2988]: E1203 00:05:06.721780 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:06 crc kubenswrapper[2988]: I1203 00:05:06.782334 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:07 crc kubenswrapper[2988]: I1203 00:05:07.781412 2988 csi_plugin.go:880] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:08 crc kubenswrapper[2988]: W1203 00:05:08.444881 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:08 crc kubenswrapper[2988]: E1203 00:05:08.445121 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:08 crc kubenswrapper[2988]: I1203 00:05:08.781858 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:09 crc kubenswrapper[2988]: I1203 00:05:09.781035 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.774959 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.775841 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.775907 2988 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.775934 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.775952 2988 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:05:10 crc kubenswrapper[2988]: I1203 00:05:10.783812 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:11 crc kubenswrapper[2988]: E1203 00:05:11.492425 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.609577 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.612520 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.612607 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.612732 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.612777 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:11 crc kubenswrapper[2988]: E1203 00:05:11.614438 2988 kubelet_node_status.go:100] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:11 crc kubenswrapper[2988]: I1203 00:05:11.781793 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:12 crc kubenswrapper[2988]: I1203 00:05:12.182344 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:12 crc kubenswrapper[2988]: I1203 00:05:12.184960 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:12 crc kubenswrapper[2988]: I1203 00:05:12.185018 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:12 crc kubenswrapper[2988]: I1203 00:05:12.185040 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:12 crc kubenswrapper[2988]: I1203 00:05:12.782219 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:12 crc kubenswrapper[2988]: E1203 00:05:12.985500 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:05:13 crc kubenswrapper[2988]: I1203 00:05:13.781509 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:14 crc kubenswrapper[2988]: I1203 
00:05:14.782351 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:15 crc kubenswrapper[2988]: E1203 00:05:15.336650 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:05:15 crc kubenswrapper[2988]: I1203 00:05:15.781285 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:16 crc kubenswrapper[2988]: I1203 00:05:16.182610 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:16 crc kubenswrapper[2988]: I1203 00:05:16.184298 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:16 crc kubenswrapper[2988]: I1203 00:05:16.184383 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 
00:05:16 crc kubenswrapper[2988]: I1203 00:05:16.184402 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:16 crc kubenswrapper[2988]: I1203 00:05:16.781827 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:17 crc kubenswrapper[2988]: I1203 00:05:17.781403 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:18 crc kubenswrapper[2988]: E1203 00:05:18.494299 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.614972 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.616526 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.616582 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.616599 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.616638 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:18 crc kubenswrapper[2988]: E1203 00:05:18.618030 2988 
kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:18 crc kubenswrapper[2988]: I1203 00:05:18.781669 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.182361 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.183538 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.183656 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.183753 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.184994 2988 scope.go:117] "RemoveContainer" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" Dec 03 00:05:19 crc kubenswrapper[2988]: E1203 00:05:19.185649 2988 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 03 00:05:19 crc kubenswrapper[2988]: I1203 00:05:19.782369 2988 csi_plugin.go:880] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:20 crc kubenswrapper[2988]: I1203 00:05:20.781649 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:21 crc kubenswrapper[2988]: I1203 00:05:21.781405 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:22 crc kubenswrapper[2988]: W1203 00:05:22.147776 2988 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:22 crc kubenswrapper[2988]: E1203 00:05:22.147872 2988 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:22 crc kubenswrapper[2988]: I1203 00:05:22.781215 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:22 crc kubenswrapper[2988]: E1203 00:05:22.986194 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 
00:05:23 crc kubenswrapper[2988]: I1203 00:05:23.781891 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:24 crc kubenswrapper[2988]: I1203 00:05:24.781797 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:25 crc kubenswrapper[2988]: E1203 00:05:25.339440 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:05:25 crc kubenswrapper[2988]: E1203 00:05:25.495742 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.618403 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.620736 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.620816 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.620848 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.620930 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:25 crc kubenswrapper[2988]: E1203 00:05:25.622532 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:25 crc kubenswrapper[2988]: I1203 00:05:25.781717 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:26 crc kubenswrapper[2988]: I1203 00:05:26.781896 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:27 crc kubenswrapper[2988]: I1203 00:05:27.781636 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:28 crc kubenswrapper[2988]: I1203 00:05:28.781438 2988 csi_plugin.go:880] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:29 crc kubenswrapper[2988]: I1203 00:05:29.781101 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:30 crc kubenswrapper[2988]: I1203 00:05:30.782301 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:31 crc kubenswrapper[2988]: I1203 00:05:31.781563 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.181978 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.183543 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.183598 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.183615 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.185061 2988 scope.go:117] "RemoveContainer" containerID="e622af8ee0026a66c407296e9fac3a8bd94d1c0d08c2c38b977c16b5026e859e" Dec 03 00:05:32 crc kubenswrapper[2988]: E1203 00:05:32.497795 2988 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.623407 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.625179 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.625226 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.625257 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.625335 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:32 crc kubenswrapper[2988]: E1203 00:05:32.627006 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:32 crc kubenswrapper[2988]: I1203 00:05:32.782468 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:32 crc kubenswrapper[2988]: E1203 00:05:32.987533 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.163779 2988 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.164809 2988 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"33df401befe5b046b9d3e6d9ddf42e7ce9a7dd335f6d395a1f85f3c26584d2ff"} Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.164925 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.165822 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.165904 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.165948 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:33 crc kubenswrapper[2988]: I1203 00:05:33.781763 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:34 crc kubenswrapper[2988]: I1203 00:05:34.781769 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.055728 2988 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:05:35 crc 
kubenswrapper[2988]: I1203 00:05:35.055909 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.057289 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.057332 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.057345 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.119741 2988 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.169876 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.170657 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.170726 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.170736 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:35 crc kubenswrapper[2988]: E1203 00:05:35.342300 2988 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187d8b9bf1154793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,LastTimestamp:2025-12-03 00:02:10.879997843 +0000 UTC m=+1.335889130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:05:35 crc kubenswrapper[2988]: I1203 00:05:35.782333 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:36 crc kubenswrapper[2988]: I1203 00:05:36.782461 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:37 crc kubenswrapper[2988]: I1203 00:05:37.782526 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:38 crc kubenswrapper[2988]: I1203 00:05:38.120713 2988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:05:38 crc kubenswrapper[2988]: I1203 00:05:38.120861 2988 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:05:38 crc kubenswrapper[2988]: I1203 00:05:38.782699 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:39 crc kubenswrapper[2988]: E1203 00:05:39.499955 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.627403 2988 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.629215 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.629239 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.629249 2988 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.629269 2988 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:05:39 crc kubenswrapper[2988]: E1203 00:05:39.630569 2988 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial 
tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 03 00:05:39 crc kubenswrapper[2988]: I1203 00:05:39.782264 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:40 crc kubenswrapper[2988]: I1203 00:05:40.784819 2988 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 03 00:05:41 crc kubenswrapper[2988]: I1203 00:05:41.655784 2988 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated" Dec 03 00:05:42 crc kubenswrapper[2988]: E1203 00:05:42.988557 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:05:43 crc systemd[1]: Stopping Kubernetes Kubelet... Dec 03 00:05:43 crc kubenswrapper[2988]: I1203 00:05:43.762127 2988 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:05:43 crc systemd[1]: kubelet.service: Deactivated successfully. Dec 03 00:05:43 crc systemd[1]: Stopped Kubernetes Kubelet. Dec 03 00:05:43 crc systemd[1]: kubelet.service: Consumed 12.698s CPU time. -- Boot 38010bfc6eae4b479e75a33beaa73a37 -- Dec 03 00:06:41 crc systemd[1]: Starting Kubernetes Kubelet... Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 00:06:41 crc kubenswrapper[3561]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.354810 3561 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359017 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359062 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359080 3561 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359097 3561 feature_gate.go:227] unrecognized feature gate: Example Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359115 3561 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359131 3561 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359147 3561 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359159 3561 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359170 3561 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359192 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359205 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359216 3561 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359227 3561 feature_gate.go:227] unrecognized feature gate: 
MetricsCollectionProfiles Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359238 3561 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359249 3561 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359260 3561 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359272 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359284 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359295 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359307 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359318 3561 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359330 3561 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359341 3561 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359352 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359363 3561 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359374 3561 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359385 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 
00:06:41.359396 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359407 3561 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359418 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359429 3561 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359442 3561 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359452 3561 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359463 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359474 3561 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359486 3561 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359498 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359509 3561 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359520 3561 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359531 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359570 3561 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359584 3561 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359596 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359607 3561 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359618 3561 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359630 3561 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359641 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359652 3561 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359665 3561 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359676 3561 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359688 3561 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359699 3561 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359712 3561 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359725 3561 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359736 3561 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359747 3561 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359758 3561 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359769 3561 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359780 3561 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.359790 3561 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359907 3561 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359936 3561 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359950 3561 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359961 3561 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359977 3561 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359986 3561 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.359999 3561 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360010 3561 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360020 3561 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360029 3561 flags.go:64] FLAG: --azure-container-registry-config=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360038 3561 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360048 3561 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360057 3561 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360067 3561 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360076 3561 flags.go:64] FLAG: --cgroup-root=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360085 3561 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360094 3561 flags.go:64] FLAG: --client-ca-file=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360103 3561 flags.go:64] FLAG: --cloud-config=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360112 3561 flags.go:64] FLAG: --cloud-provider=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360121 3561 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360134 3561 flags.go:64] FLAG: --cluster-domain=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360145 3561 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360155 3561 flags.go:64] FLAG: --config-dir=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360164 3561 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360175 3561 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360186 3561 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360195 3561 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360204 3561 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360214 3561 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360223 3561 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360232 3561 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360241 3561 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360250 3561 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360259 3561 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360271 3561 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360280 3561 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360289 3561 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360298 3561 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360307 3561 flags.go:64] FLAG: --enable-server="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360316 3561 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360327 3561 flags.go:64] FLAG: --event-burst="100"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360337 3561 flags.go:64] FLAG: --event-qps="50"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360346 3561 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360355 3561 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360364 3561 flags.go:64] FLAG: --eviction-hard=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360375 3561 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360384 3561 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360393 3561 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360402 3561 flags.go:64] FLAG: --eviction-soft=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360411 3561 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360420 3561 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360430 3561 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360439 3561 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360450 3561 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360459 3561 flags.go:64] FLAG: --feature-gates=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360470 3561 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360480 3561 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360499 3561 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360509 3561 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360518 3561 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360528 3561 flags.go:64] FLAG: --help="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360537 3561 flags.go:64] FLAG: --hostname-override=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360575 3561 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360585 3561 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360594 3561 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360603 3561 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360612 3561 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360621 3561 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360630 3561 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360639 3561 flags.go:64] FLAG: --iptables-drop-bit="15"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360648 3561 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360657 3561 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360665 3561 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360674 3561 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360684 3561 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360693 3561 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360702 3561 flags.go:64] FLAG: --kube-reserved=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360711 3561 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360720 3561 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360729 3561 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360738 3561 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360759 3561 flags.go:64] FLAG: --lock-file=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360768 3561 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360777 3561 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360786 3561 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360809 3561 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360818 3561 flags.go:64] FLAG: --logging-format="text"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360827 3561 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360841 3561 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360873 3561 flags.go:64] FLAG: --manifest-url=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360884 3561 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360899 3561 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360910 3561 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360926 3561 flags.go:64] FLAG: --max-pods="110"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360938 3561 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360949 3561 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360958 3561 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360968 3561 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360977 3561 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360986 3561 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.360995 3561 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361016 3561 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361026 3561 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361035 3561 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361044 3561 flags.go:64] FLAG: --pod-cidr=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361053 3561 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361068 3561 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361077 3561 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361086 3561 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361096 3561 flags.go:64] FLAG: --port="10250"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361107 3561 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361117 3561 flags.go:64] FLAG: --provider-id=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361127 3561 flags.go:64] FLAG: --qos-reserved=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361137 3561 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361146 3561 flags.go:64] FLAG: --register-node="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361156 3561 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361164 3561 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361181 3561 flags.go:64] FLAG: --registry-burst="10"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361190 3561 flags.go:64] FLAG: --registry-qps="5"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361199 3561 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361208 3561 flags.go:64] FLAG: --reserved-memory=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361222 3561 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361232 3561 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361241 3561 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361250 3561 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361259 3561 flags.go:64] FLAG: --runonce="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361268 3561 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361277 3561 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361287 3561 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361297 3561 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361306 3561 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361315 3561 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361325 3561 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361334 3561 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361343 3561 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361352 3561 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361361 3561 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361370 3561 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361380 3561 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361389 3561 flags.go:64] FLAG: --system-cgroups=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361398 3561 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361412 3561 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361421 3561 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361430 3561 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361442 3561 flags.go:64] FLAG: --tls-min-version=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361451 3561 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361459 3561 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361468 3561 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361477 3561 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361486 3561 flags.go:64] FLAG: --v="2"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361499 3561 flags.go:64] FLAG: --version="false"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361510 3561 flags.go:64] FLAG: --vmodule=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361520 3561 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.361533 3561 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361672 3561 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361685 3561 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361698 3561 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361709 3561 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361719 3561 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361731 3561 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361742 3561 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361753 3561 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361763 3561 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361774 3561 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361785 3561 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361797 3561 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361807 3561 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361818 3561 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361829 3561 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361840 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361851 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361861 3561 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361872 3561 feature_gate.go:227] unrecognized feature gate: Example
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361884 3561 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361895 3561 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361906 3561 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361916 3561 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361927 3561 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361937 3561 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361948 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361959 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361969 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361980 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.361991 3561 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362001 3561 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362016 3561 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362027 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362037 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362048 3561 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362059 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362070 3561 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362080 3561 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362091 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362103 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362115 3561 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362127 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362137 3561 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362148 3561 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362160 3561 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362173 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362187 3561 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362211 3561 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362234 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362250 3561 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362261 3561 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362273 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362285 3561 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362295 3561 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362306 3561 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362316 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362327 3561 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362338 3561 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362348 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.362359 3561 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.362372 3561 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.370686 3561 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.370728 3561 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370791 3561 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370802 3561 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370810 3561 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370817 3561 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370825 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370832 3561 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370840 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370848 3561 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370855 3561 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370863 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370870 3561 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370877 3561 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370885 3561 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370892 3561 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370899 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370907 3561 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370914 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370921 3561 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370928 3561 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370935 3561 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370944 3561 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370951 3561 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370958 3561 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370966 3561 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370973 3561 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370980 3561 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370987 3561 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.370994 3561 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371002 3561 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371009 3561 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371016 3561 feature_gate.go:227] unrecognized feature gate: Example
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371025 3561 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371032 3561 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371039 3561 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371049 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371056 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371063 3561 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371071 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371078 3561 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371085 3561 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371092 3561 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371099 3561 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371107 3561 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371114 3561 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371121 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371128 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371135 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371142 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371149 3561 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371156 3561 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371163 3561 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371171 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371179 3561 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371186 3561 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371194 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371201 3561 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371209 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371216 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371223 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371230 3561 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.371239 3561 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203
00:06:41.371335 3561 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371344 3561 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371351 3561 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371359 3561 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371366 3561 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371374 3561 feature_gate.go:227] unrecognized feature gate: Example Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371381 3561 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371388 3561 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371395 3561 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371404 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371412 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371419 3561 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371427 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371435 3561 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371443 3561 feature_gate.go:227] unrecognized feature gate: 
MetricsCollectionProfiles Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371450 3561 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371458 3561 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371465 3561 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371472 3561 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371479 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371486 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371493 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371500 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371507 3561 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371514 3561 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371521 3561 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371528 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371551 3561 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371562 3561 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:06:41 crc kubenswrapper[3561]: 
W1203 00:06:41.371569 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371576 3561 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371584 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371592 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371599 3561 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371606 3561 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371614 3561 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371622 3561 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371630 3561 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371637 3561 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371645 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371652 3561 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371659 3561 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371669 3561 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371676 3561 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371684 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371692 3561 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371699 3561 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371707 3561 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371714 3561 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371721 3561 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371729 3561 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371736 3561 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371743 3561 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371750 3561 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371757 3561 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371765 3561 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371772 3561 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371779 3561 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371786 3561 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.371793 3561 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.371802 3561 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.372241 3561 server.go:925] "Client rotation is on, will bootstrap in background"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.376043 3561 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.377295 3561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.377599 3561 server.go:982] "Starting client certificate rotation"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.377612 3561 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.381735 3561 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-05-09 04:21:49.680972012 +0000 UTC
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.381887 3561 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 3772h15m8.299089683s for next certificate rotation
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.385909 3561 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.388207 3561 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.389678 3561 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.405169 3561 remote_runtime.go:143] "Validated CRI v1 runtime API"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.405239 3561 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.442318 3561 remote_image.go:111] "Validated CRI v1 image API"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.450338 3561 fs.go:132] Filesystem UUIDs: map[2025-12-03-00-01-23-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.450392 3561 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.480580 3561 manager.go:217] Machine: {Timestamp:2025-12-03 00:06:41.478207479 +0000 UTC m=+0.258641807 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:c007f178-aa0e-43e3-b9af-eea9bad4fb2f BootID:38010bfc-6eae-4b47-9e75-a33beaa73a37 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:90:12:e6 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:90:12:e6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7b:e8:f1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:45:63:20 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f0:16:86 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:99:37:f8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:db:36:39:eb:6b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:61:4f:19:4c:e9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.480948 3561 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.481049 3561 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.482720 3561 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.483096 3561 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.483463 3561 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.483498 3561 container_manager_linux.go:304] "Creating device plugin manager"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.483705 3561 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.483991 3561 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.484650 3561 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.484825 3561 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.485896 3561 kubelet.go:406] "Attempting to sync node with API server"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.485975 3561 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.486022 3561 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.486061 3561 kubelet.go:322] "Adding apiserver pod source"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.486359 3561 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.488821 3561 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.490095 3561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.491153 3561 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.491290 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.491361 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.491495 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.491430 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.491869 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.491933 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.491961 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492000 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492020 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492045 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492061 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492078 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492097 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492113 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492139 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492156 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492181 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492238 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492272 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.492691 3561 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.493434 3561 server.go:1262] "Started kubelet"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.493796 3561 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.493909 3561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.494151 3561 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.494836 3561 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 00:06:41 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.496075 3561 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d8bdaf2e5bef9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:06:41.493376761 +0000 UTC m=+0.273811089,LastTimestamp:2025-12-03 00:06:41.493376761 +0000 UTC m=+0.273811089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497508 3561 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497597 3561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497747 3561 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-26 06:07:38.028493191 +0000 UTC
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497806 3561 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 2718h0m56.530692288s for next certificate rotation
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497839 3561 volume_manager.go:289] "The desired_state_of_world populator starts"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.497879 3561 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.498180 3561 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.499403 3561 server.go:461] "Adding debug handlers to kubelet server"
Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.510358 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="200ms"
Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.510280 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.510708 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.531953 3561 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.531990 3561 factory.go:55] Registering systemd factory
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.532000 3561 factory.go:221] Registration of the systemd container factory successfully
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.533260 3561 factory.go:153] Registering CRI-O factory
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.533307 3561 factory.go:221] Registration of the crio container factory successfully
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.533352 3561 factory.go:103] Registering Raw factory
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.533386 3561 manager.go:1196] Started watching for new ooms in manager
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.535143 3561 manager.go:319] Starting recovery of all containers
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566160 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566237 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566278 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566345 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566433 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566472 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566508 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566575 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566617 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342"
volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.566653 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568008 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568070 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568159 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568212 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568260 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" 
volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568309 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568386 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568435 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568469 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568511 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568585 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" 
volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568610 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568683 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568707 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568737 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568799 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568825 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" 
volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568885 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568930 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568954 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568986 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.568799 3561 manager.go:324] Recovery completed Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569009 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569074 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569099 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569122 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569144 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569167 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569191 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569213 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" 
volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569237 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569292 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569317 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569347 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569378 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569409 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" 
volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569434 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569457 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569563 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569680 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569727 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.569753 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" 
volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570149 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570238 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570262 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570360 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570394 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570432 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" 
volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570456 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570479 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570504 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570526 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570603 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570637 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" 
volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570662 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570692 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570716 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570796 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570834 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570859 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" 
volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570924 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570956 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.570978 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571001 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571024 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571045 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" 
volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571068 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571090 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571134 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571189 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571225 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571287 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" 
volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571362 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571398 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571433 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571478 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571523 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571627 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" 
volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571662 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571690 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571738 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571771 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571805 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571834 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.571869 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573047 3561 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573125 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573216 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573273 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573301 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573347 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573373 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573411 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573469 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573501 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573580 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573606 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573630 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573653 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573678 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573701 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573736 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573759 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573784 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573833 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573865 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573889 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573915 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573939 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573972 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.573997 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574019 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574042 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574065 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574096 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574138 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574171 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574212 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574241 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574265 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574289 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574321 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574345 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574368 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574392 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574415 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574441 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574462 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574486 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574508 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574578 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574603 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574625 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574651 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574673 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574706 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574737 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574761 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574785 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574807 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574830 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574854 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574877 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574919 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574956 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.574986 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575009 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575031 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575054 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575076 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575100 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575126 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575154 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575185 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575219 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575254 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575298 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575325 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575348 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575370 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575394 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575416 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575439 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575471 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575495 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575520 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575571 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575597 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575621 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575652 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575685 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575708 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575732 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575755 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575780 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575802 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575826 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575847 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575886 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575932 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.575975 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576001 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576024 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576047 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576080 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576113 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576139 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576190 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576230 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576263 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576289 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576321 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576347 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576373 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576397 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576422 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576446 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576472 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576496 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576519 3561 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext=""
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576728 3561 reconstruct_new.go:102] "Volume reconstruction finished"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.576752 3561 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.591886 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.593639 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.593694 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.593713 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.595265 3561 cpu_manager.go:215] "Starting CPU manager" policy="none"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.595302 3561 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Dec 03 00:06:41 crc kubenswrapper[3561]:
I1203 00:06:41.595332 3561 state_mem.go:36] "Initialized new in-memory state store" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.597804 3561 policy_none.go:49] "None policy: Start" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.598027 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.598832 3561 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.598876 3561 state_mem.go:35] "Initializing new in-memory state store" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.599436 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.599475 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.599488 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.599512 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.601001 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.660406 3561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.662573 3561 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.663063 3561 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.663122 3561 kubelet.go:2343] "Starting kubelet main sync loop" Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.663220 3561 kubelet.go:2367] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.668175 3561 manager.go:296] "Starting Device Plugin manager" Dec 03 00:06:41 crc kubenswrapper[3561]: W1203 00:06:41.668170 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.668224 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.668238 3561 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.668258 3561 server.go:79] "Starting device plugin registration server" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.668737 3561 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.668814 3561 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 
00:06:41.668822 3561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.711889 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="400ms" Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.737426 3561 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.763840 3561 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.763944 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.764044 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.766432 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.766587 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.766624 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.766983 3561 topology_manager.go:215] "Topology Admit Handler" 
podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.767120 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.767218 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.767297 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.768930 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769027 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769039 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769057 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769067 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769079 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769205 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769250 3561 kubelet_node_status.go:402] "Setting node 
annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769460 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.769581 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770006 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770040 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770050 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770204 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770243 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770411 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.770462 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771040 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771063 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771071 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771153 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771209 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771237 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771415 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771479 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771609 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771664 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771748 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771824 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.771917 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.772896 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.772918 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.772926 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.773035 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.773053 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.773764 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.773977 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.774174 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.774502 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.774584 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.774620 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.801882 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.803726 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.803846 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.803933 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc 
kubenswrapper[3561]: I1203 00:06:41.804027 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:06:41 crc kubenswrapper[3561]: E1203 00:06:41.806238 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883034 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883103 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883142 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883179 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883220 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883320 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883383 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883427 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883490 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883563 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883601 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883639 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883679 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883714 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.883751 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985610 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985683 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985722 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985759 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985801 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985827 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985873 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985944 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985971 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985880 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") 
pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.985841 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986152 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986190 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986228 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986245 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986264 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986290 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986301 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986368 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986402 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986445 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986482 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986519 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986621 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986635 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986639 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986698 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986704 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986764 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:41 crc kubenswrapper[3561]: I1203 00:06:41.986830 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.113192 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: E1203 00:06:42.114285 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="800ms"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.129411 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.142710 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-0a2de7d1b95ecca1fea98e3cbecce83f6b0b2484df62e112ef9082a4e4ea9f1a WatchSource:0}: Error finding container 0a2de7d1b95ecca1fea98e3cbecce83f6b0b2484df62e112ef9082a4e4ea9f1a: Status 404 returned error can't find the container with id 0a2de7d1b95ecca1fea98e3cbecce83f6b0b2484df62e112ef9082a4e4ea9f1a
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.147931 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.148236 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae85115fdc231b4002b57317b41a6400.slice/crio-23ac66047d346d4d7024c35b05228652acbf32ac7b2dc7f7b4dbba85754e4de7 WatchSource:0}: Error finding container 23ac66047d346d4d7024c35b05228652acbf32ac7b2dc7f7b4dbba85754e4de7: Status 404 returned error can't find the container with id 23ac66047d346d4d7024c35b05228652acbf32ac7b2dc7f7b4dbba85754e4de7
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.169260 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.180264 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.187770 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-5ffc4c078e195152d2407640582b9179b4fd9d276cb543b77436bcce63db0b31 WatchSource:0}: Error finding container 5ffc4c078e195152d2407640582b9179b4fd9d276cb543b77436bcce63db0b31: Status 404 returned error can't find the container with id 5ffc4c078e195152d2407640582b9179b4fd9d276cb543b77436bcce63db0b31
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.207351 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.209694 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.209757 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.209787 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.209834 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: E1203 00:06:42.211037 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc"
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.221898 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a57a7fb1944b43a6bd11a349520d301.slice/crio-431d7a780fc8fc08ed0c86c88806ab6a598b88db5185c356704d2417625ac2b7 WatchSource:0}: Error finding container 431d7a780fc8fc08ed0c86c88806ab6a598b88db5185c356704d2417625ac2b7: Status 404 returned error can't find the container with id 431d7a780fc8fc08ed0c86c88806ab6a598b88db5185c356704d2417625ac2b7
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.495811 3561 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.589750 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:42 crc kubenswrapper[3561]: E1203 00:06:42.589821 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.672404 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"5ffc4c078e195152d2407640582b9179b4fd9d276cb543b77436bcce63db0b31"}
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.674115 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"66d319bb0bf9aaa7612810fdc05c0a782ea83bd56a4a33ca622a0322830506b8"}
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.675678 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"23ac66047d346d4d7024c35b05228652acbf32ac7b2dc7f7b4dbba85754e4de7"}
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.676742 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"0a2de7d1b95ecca1fea98e3cbecce83f6b0b2484df62e112ef9082a4e4ea9f1a"}
Dec 03 00:06:42 crc kubenswrapper[3561]: I1203 00:06:42.677828 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"431d7a780fc8fc08ed0c86c88806ab6a598b88db5185c356704d2417625ac2b7"}
Dec 03 00:06:42 crc kubenswrapper[3561]: E1203 00:06:42.915907 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="1.6s"
Dec 03 00:06:42 crc kubenswrapper[3561]: W1203 00:06:42.925273 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:42 crc kubenswrapper[3561]: E1203 00:06:42.925388 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.012128 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.014051 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.014109 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.014130 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.014170 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:06:43 crc kubenswrapper[3561]: E1203 00:06:43.015256 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc"
Dec 03 00:06:43 crc kubenswrapper[3561]: W1203 00:06:43.107920 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: E1203 00:06:43.107998 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: W1203 00:06:43.206576 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: E1203 00:06:43.206633 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.495211 3561 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.681737 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952" exitCode=0
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.681886 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.681911 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683586 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683624 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"3687282c2f2ca81897c70f48cdba7f5db4e27c5539c8d2b3ca4b0287e477f56c"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683626 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683695 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683584 3561 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="3687282c2f2ca81897c70f48cdba7f5db4e27c5539c8d2b3ca4b0287e477f56c" exitCode=0
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.683726 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.685484 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.685570 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.685572 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.685703 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686352 3561 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="da1f3eb2af90dd4ae994c3f81b186fb10a467806cd3706e8edeab9de547eb345" exitCode=0
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686400 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686417 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"da1f3eb2af90dd4ae994c3f81b186fb10a467806cd3706e8edeab9de547eb345"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686588 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686636 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.686662 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.690813 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.690845 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.690855 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.692774 3561 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="e6485be1e489f3639dea8a99d8c28a92ae0a26771b10eb70e93f2e898701f49e" exitCode=0
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.692847 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"e6485be1e489f3639dea8a99d8c28a92ae0a26771b10eb70e93f2e898701f49e"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.692869 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.694992 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.695035 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.695052 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.698044 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"3170adb4d964bb1b0d4fcefac2050bb117aeab3fbaf35e07671fe5c034d5cf00"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.698094 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"1ad4760bdc75e62c8568812bcaa24c26f3aef7bfab41f02f3d71e575097b33e1"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.698118 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"7f71563717f644cd0a18392cb754866e3a2feeed434f09fb5b2546616cbfb3ab"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.698137 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"6ae381cd61e4203c3136039f3e986bbafe0ddf088d1bfbfc9fa95998167176b9"}
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.698229 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.699566 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.699596 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:43 crc kubenswrapper[3561]: I1203 00:06:43.699610 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.495615 3561 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.159:6443: connect: connection refused
Dec 03 00:06:44 crc kubenswrapper[3561]: E1203 00:06:44.518020 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="3.2s"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.615955 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.617182 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.617218 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.617230 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.617256 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 03 00:06:44 crc kubenswrapper[3561]: E1203 00:06:44.617958 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.706996 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"2f93921dc55acd4e6010ef0f3772ca349a6cb8580c58893b8a87e68a5071fe81"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.707041 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"2a466ae766128033434f8ad5f25e75a88fcb12691227ede54b415d0316e3e1d1"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.709087 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.709565 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"ce6341e153ef0c67912e90a0b7692c83762f46b850aa6ae1295493ecd5d38961"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.711406 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.711443 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.711456 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.713975 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.714003 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.716573 3561 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="7df873f5567fc3299275c7a27c8a0994e34849d68f9e3871d7dd4ff67182bcc4" exitCode=0
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.716673 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.717163 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.717482 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"7df873f5567fc3299275c7a27c8a0994e34849d68f9e3871d7dd4ff67182bcc4"}
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.717953 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.717978 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.717991 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.720014 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.720039 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:44 crc kubenswrapper[3561]: I1203 00:06:44.720050 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.722393 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75"}
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.722746 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3"}
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.722762 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a"}
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.722625 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.724263 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.724325 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.724343 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.725441 3561 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="d5164271fb92cac86e96e1e9b808f31b2d6e015504ce0df9f212f8ec6ec30f3d" exitCode=0
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.725495 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"d5164271fb92cac86e96e1e9b808f31b2d6e015504ce0df9f212f8ec6ec30f3d"}
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.725610 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.726864 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.726909 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.726924 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.728871 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"2c7ef4400edc2adf48253725945c8e70150599ec81c6e14eab0fd8538e1e6f99"}
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.728915 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.728926 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.729859 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.729894 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.729909 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.730040 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.730063 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.730072 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:45 crc kubenswrapper[3561]: I1203 00:06:45.904448 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.267218 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.267372 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.268465 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.268507 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.268517 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.686069 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.734841 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.734956 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"b760c695e7b8d3b262f04daa7a579fb228f4e1fba51fb41f3c911344215f5864"}
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.735049 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"58bf291296d90004ce1675b9fa94da22f32b2b341dd1e9677056090525d91beb"}
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.735067 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.735217 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736711 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736791 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736819 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736730 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736863 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:46 crc kubenswrapper[3561]: I1203 00:06:46.736872 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.743828 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"76c8ec04c94ffc96c251b87a684e50a3368f4910a2a6466207d7c8611931532b"}
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.743893 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"33e826ced05ba3c8ba4954263716756b01c01d91e16ff0add1ae912a03b99218"}
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.743903 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.743965 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.743903 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745279 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745348 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 00:06:47 crc
kubenswrapper[3561]: I1203 00:06:47.745283 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745374 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745432 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745460 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745705 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745736 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.745752 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.818384 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.820184 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.820234 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.820247 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[3561]: I1203 00:06:47.820276 3561 kubelet_node_status.go:77] "Attempting to register node" 
node="crc" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.049043 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.049272 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.050572 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.050638 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.050655 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.580376 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.748918 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.748917 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750836 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750857 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750890 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750899 
3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750911 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[3561]: I1203 00:06:48.750920 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.147614 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.147790 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.149129 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.149162 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.149174 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.951120 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.951370 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.953203 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.953270 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[3561]: I1203 00:06:49.953323 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.026266 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.026454 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.028017 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.028050 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.028097 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.041740 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.049229 3561 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.049467 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.349414 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.349762 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.350976 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.351027 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.351055 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[3561]: E1203 00:06:51.738259 3561 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.761736 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.763302 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.763347 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[3561]: I1203 00:06:51.763371 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[3561]: W1203 00:06:54.983183 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:54 crc kubenswrapper[3561]: I1203 00:06:54.983375 3561 trace.go:236] Trace[806686178]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (03-Dec-2025 00:06:44.950) (total time: 10033ms): Dec 03 00:06:54 crc kubenswrapper[3561]: Trace[806686178]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10032ms (00:06:54.983) Dec 03 00:06:54 crc kubenswrapper[3561]: Trace[806686178]: [10.033051123s] [10.033051123s] END Dec 03 00:06:54 crc kubenswrapper[3561]: E1203 00:06:54.983469 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: W1203 00:06:55.056188 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.056332 3561 trace.go:236] Trace[1625356963]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (03-Dec-2025 00:06:45.054) (total time: 10002ms): Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1625356963]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:55.056) Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1625356963]: [10.002012648s] [10.002012648s] END Dec 03 00:06:55 crc kubenswrapper[3561]: 
E1203 00:06:55.056359 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: W1203 00:06:55.312647 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.312764 3561 trace.go:236] Trace[1828351869]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (03-Dec-2025 00:06:45.310) (total time: 10001ms): Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1828351869]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:55.312) Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1828351869]: [10.001921466s] [10.001921466s] END Dec 03 00:06:55 crc kubenswrapper[3561]: E1203 00:06:55.312786 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.499126 3561 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.738906 3561 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial 
tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.739057 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:55 crc kubenswrapper[3561]: W1203 00:06:55.887074 3561 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.887227 3561 trace.go:236] Trace[1485097774]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (03-Dec-2025 00:06:45.883) (total time: 10003ms): Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1485097774]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (00:06:55.887) Dec 03 00:06:55 crc kubenswrapper[3561]: Trace[1485097774]: [10.003743908s] [10.003743908s] END Dec 03 00:06:55 crc kubenswrapper[3561]: E1203 00:06:55.887264 3561 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.905763 3561 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:55 crc kubenswrapper[3561]: I1203 00:06:55.905840 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.184556 3561 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.184614 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.193933 3561 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.194059 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.302723 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.302820 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.303757 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.303800 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[3561]: I1203 00:06:56.303810 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.585707 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.585980 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.587783 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.587850 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.587870 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.683932 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:58 crc kubenswrapper[3561]: 
I1203 00:06:58.782097 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.783039 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.783159 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.783185 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[3561]: I1203 00:06:58.786186 3561 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.498913 3561 apiserver.go:52] "Watching apiserver" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.523704 3561 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.525891 3561 kubelet.go:2429] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-controller-manager/controller-manager-778975cc4f-x5vcf","openshift-dns/node-resolver-dn27q","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-image-registry/image-registry-75779c45fd-v2j2v","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-scheduler/installer-8-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-console/console-644bb77b49-5x5xk","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm","openshift-machine-config-operator/machine-config-server-v65wr","openshift-multus/multus-q88th","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-apiserver/installer-12-crc","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-network-operator/iptables-alerter-wwpnd","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-marketplace/redhat-operators-f4jkp","openshift-marketplace/community-operators-8jhz6","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","openshift-network-diagnostics/network-check-target-v54bt","openshift-operator-lifecycle-manager/collect-profiles-292
51920-wcws2","openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b","openshift-ingress-canary/ingress-canary-2vhcn","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7","openshift-console/downloads-65476884b9-9wcvx","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-service-ca/service-ca-666f99b6f-kk8kg","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-dns/dns-default-gbw49","openshift-kube-controller-manager/revision-pruner-9-crc","openshift-kube-scheduler/installer-7-crc","openshift-multus/network-metrics-daemon-qdfr4","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","openshift-image-registry/node-ca-l92hr","openshift-kube-apiserver/installer-9-crc","openshift-kube-controller-manager/revision-pruner-11-crc","openshift-marketplace/community-operators-sdddl","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-network-node-identity/network-node-identity-7xghp","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-etcd/etcd-crc","openshift-kube-controller-manager/installer-10-crc","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-kube-controller-manager/installer-10-retry-1-crc","openshift-ovn-kubernetes/ovnkube-node-44qcg","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs","openshift-marketplace/certified-operators-7287f","openshift-machine-api/c
ontrol-plane-machine-set-operator-649bd778b4-tt5tw","openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-marketplace/redhat-marketplace-8s8pc","openshift-apiserver/apiserver-7fc54b8dd7-d2bhp","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd","openshift-kube-controller-manager/revision-pruner-10-crc","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-kube-controller-manager/installer-11-crc","openshift-kube-controller-manager/revision-pruner-8-crc"] Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526007 3561 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526162 3561 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526236 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526325 3561 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526399 3561 
topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526441 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526467 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526522 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526572 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.526610 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.526645 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526661 3561 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526719 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.526787 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526815 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526819 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526831 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526882 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.526969 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.526893 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.526817 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527034 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.527044 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.527085 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.527125 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527236 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527239 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.527301 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527408 3561 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527574 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527676 3561 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527775 3561 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527878 3561 topology_manager.go:215] "Topology Admit Handler" 
podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527971 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528013 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528036 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528083 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528089 3561 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528087 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528111 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.527977 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528316 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528320 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528377 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528394 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528424 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528498 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528616 3561 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528670 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528722 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528807 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528821 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528613 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.528321 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.528972 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529042 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.529073 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529123 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.529172 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529364 3561 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529510 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529678 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.529950 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530016 3561 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530097 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.530167 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530311 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530493 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530579 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530606 3561 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.530626 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530866 3561 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.531134 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.531393 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.531513 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.531585 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.531146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.531955 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532028 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532045 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532135 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532345 3561 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532379 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.530982 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532689 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532703 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.532751 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.532846 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533072 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533142 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533164 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.533215 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533453 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533506 3561 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533454 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.533940 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534188 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.535557 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534226 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534248 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534326 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.535713 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534392 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.535925 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534434 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.535972 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534492 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536092 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.534581 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536124 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536134 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:06:59 crc kubenswrapper[3561]: 
I1203 00:06:59.534712 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536237 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536268 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536338 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536346 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.536403 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.536430 3561 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.537914 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.538107 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.538268 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.539029 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.539375 3561 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.539491 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.539702 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.539157 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.540286 3561 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.540345 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.540583 3561 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541060 3561 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541137 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541201 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541344 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541588 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541679 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.541717 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.542418 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.542440 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.542944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.542989 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.543191 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.544298 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.544433 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.544483 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.544650 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.544897 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545018 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.545127 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545240 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545556 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545602 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545612 3561 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.545697 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545793 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545856 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.545872 3561 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.546137 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.546390 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.546440 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.546459 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.546606 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.546707 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.547903 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.548218 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.548248 3561 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.548322 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.548628 3561 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.548966 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.549190 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.549399 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.549202 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.549093 3561 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550133 3561 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.550247 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550333 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550375 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-9-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.550520 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550621 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550656 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550671 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550742 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550873 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" podName="image-registry-75779c45fd-v2j2v"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550937 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.550998 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551070 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551078 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551096 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551142 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551301 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551324 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551446 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551672 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551694 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.551753 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551839 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551935 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552064 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552081 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552270 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552305 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.551270 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552609 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552653 3561 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552803 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552884 3561 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.552990 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553019 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553190 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553212 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.553267 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553268 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553356 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553361 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553469 3561 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553698 3561 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553716 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.553903 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.553972 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.554021 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.554048 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.554183 3561 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.554929 3561 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.555001 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.555367 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.555474 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.555843 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.556111 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.556231 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.556328 3561 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.556753 3561 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.556783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.556928 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.557020 3561 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.557123 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.557208 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.557896 3561 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.557988 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.558516 3561 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.558728 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.559804 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.632480 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:34Z\\\",\\\"message\\\":\\\" Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125\\\\nI0813 19:59:36.141079 1 status.go:99] Syncing status: available\\\\nI0813 19:59:36.366889 1 status.go:69] Syncing status: re-syncing\\\\nI0813 19:59:36.405968 1 sync.go:75] Provider is NoOp, skipping synchronisation\\\\nI0813 19:59:36.451686 1 status.go:99] Syncing status: available\\\\nE0813 20:01:53.428030 1 leaderelection.go:369] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io \\\\\\\"machine-api-operator\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\\nE0813 20:02:53.432992 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:03:53.443054 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:53.434088 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nI0813 20:05:34.050754 1 leaderelection.go:285] failed to renew lease openshift-machine-api/machine-api-operator: timed out waiting for the condition\\\\nE0813 20:05:34.147127 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io \\\\\\\"machine-api-operator\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\\nF0813 20:05:34.165368 1 start.go:104] Leader election lost\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:12Z\\\"}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 00:06:59 crc kubenswrapper[3561]: E1203 00:06:59.632662 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.644385 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.656982 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.673624 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.706738 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.717355 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.724722 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/installer-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aca1f9ff-a685-4a78-b461-3931b757f754\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler\"/\"installer-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.736235 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3e81c3-c292-4130-9436-f94062c91efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-778975cc4f-x5vcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.747636 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:06Z\\\"}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.761451 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.776020 3561 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.784484 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.789124 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.807050 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.829336 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01feb2e0-a0f4-4573-8335-34e364e0ef40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-74fc7c67cc-xqf8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.846448 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:00:35Z\\\",\\\"message\\\":\\\"\\\\nI0813 20:00:35.377018 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting\\\\nI0813 20:00:35.377039 1 builder.go:302] server exited\\\\nI0813 20:00:35.377111 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...\\\\nI0813 20:00:35.377129 1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated\\\\nI0813 20:00:35.377162 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...\\\\nI0813 20:00:35.377182 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ...\\\\nI0813 20:00:35.377194 1 base_controller.go:172] Shutting down LoggingSyncer ...\\\\nI0813 20:00:35.377277 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\\\\nI0813 20:00:35.377284 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\\\\nI0813 20:00:35.377292 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ...\\\\nI0813 20:00:35.377298 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated\\\\nI0813 20:00:35.377307 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\\\\nI0813 20:00:35.377314 1 base_controller.go:104] All LoggingSyncer workers have been terminated\\\\nI0813 20:00:35.377334 1 
base_controller.go:114] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ...\\\\nI0813 20:00:35.378324 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ...\\\\nI0813 20:00:35.378427 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\\\\nI0813 20:00:35.378437 1 base_controller.go:104] All StatusSyncer_kube-storage-version-migrator workers have been terminated\\\\nW0813 20:00:35.381309 1 builder.go:109] graceful termination failed, controllers failed with error: stopped\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:17Z\\\"}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.867678 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.888734 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.900600 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1784282a-268d-4e44-a766-43281414e2dc\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-11-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.912430 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/installer-7-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57cce81-8ea0-4c4d-aae1-ee024d201c15\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler\"/\"installer-7-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.927981 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d29937-debd-4407-b2b1-d1053cb0f342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-776b8b7477-sfpvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.941167 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b73e61-d8d2-4892-8a19-005929c9d4e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:45Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://299136b4947
012b9172489c064874bf7603c2d89776eb9145340e858fe47c952\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:42Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T00:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.956502 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.968762 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.979409 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:59 crc kubenswrapper[3561]: I1203 00:06:59.990902 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a7de23-6134-4044-902a-0900dc04a501\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-kk8kg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.009083 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-9-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad657a4-8b02-4373-8d0d-b0e25345dc90\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver\"/\"installer-9-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.024003 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51936587-a4af-470d-ad92-8ab9062cbc72\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251935-d7x6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.050949 3561 status_manager.go:877] 
"Failed to update status for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e8708a-e40d-4d28-846b-c52eda4d1755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-7fc54b8dd7-d2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.066842 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc02677d-deed-4cc9-bb8c-0dd300f83655\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-10-retry-1-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.078762 3561 
status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:09Z\\\"}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.091584 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:10Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.103205 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.125280 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.138357 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.162013 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-75779c45fd-v2j2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.180305 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:04:50Z\\\",\\\"message\\\":\\\"time=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"Go OS/Arch: linux/amd64\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"[metrics] Registering marketplace metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"[metrics] Serving marketplace metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"TLS keys set, using https for metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=warning msg=\\\\\\\"Config API is not available\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=info msg=\\\\\\\"setting up scheme\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=fatal msg=\\\\\\\"failed to determine if *v1.ConfigMap is namespaced: failed to get restmapping: failed to get server groups: Get \\\\\\\\\\\\\\\"https://10.217.4.1:443/api\\\\\\\\\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:04:47Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.199196 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.205297 3561 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.222323 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-sdddl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-sdddl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.244001 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc41b00e-72b1-4d82-a286-aa30fbe4095a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a466ae766128033434f8ad5f25e75a88fcb12691227ede54b415d0316e3e1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2f93921dc55acd4e6010ef0f3772ca349a6cb8580c58893b8a87e68a5071fe81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7ef4400edc2adf48253725945c8e70150599ec81c6e14eab0fd8538e1e6f99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1f3eb2af90dd4ae994c3f81b186fb10a467806cd3706e8edeab9de547eb345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1f3eb2af90dd4ae994c3f81b186fb10a467806cd3706e8edeab9de547eb345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:42Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T00:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.261572 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.275447 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.291286 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:05Z\\\"}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.305589 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.323329 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:07:30Z\\\",\\\"message\\\":\\\" request from succeeding\\\\nW0813 20:07:30.198690 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.201950 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.198766 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.198484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.202220 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context 
canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.199382 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2025-08-13T20:07:30.223Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1650\\\\tWait completed, proceeding to shutdown the manager\\\\n2025-08-13T20:07:30.228Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:05:07Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.341527 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.356576 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.395668 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:17Z\\\",\\\"message\\\":\\\"] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:36.668906 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:52.839119 1 reflector.go:539] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nF0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:04:16Z\\\"}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.430474 3561 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.456996 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:12Z\\\"}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.502308 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.541693 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:01:13Z\\\",\\\"message\\\":\\\"\\\\\\\"deployments\\\\\\\", Namespace: \\\\\\\"openshift-console\\\\\\\", Name: \\\\\\\"console\\\\\\\", ...}},\\\\n\\u00a0\\u00a0}\\\\nI0813 20:01:10.648051 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.crt\\\\\\\" has been modified (old=\\\\\\\"986026bc94c265a214cb3459ff9cc01d5aa0eabbc41959f11d26b6222c432f4b\\\\\\\", new=\\\\\\\"c8d612f3b74dc6507c61e4d04d4ecf5c547ff292af799c7a689fe7a15e5377e0\\\\\\\")\\\\nW0813 20:01:10.679640 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\\\\nI0813 20:01:10.680909 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.key\\\\\\\" has been modified (old=\\\\\\\"4b5d87903056afff0f59aa1059503707e0decf9c5ece89d2e759b1a6adbf089a\\\\\\\", new=\\\\\\\"b9e8e76d9d6343210f883954e57c9ccdef1698a4fed96aca367288053d3b1f02\\\\\\\")\\\\nI0813 20:01:10.683590 1 genericapiserver.go:679] \\\\\\\"[graceful-termination] pre-shutdown hooks completed\\\\\\\" name=\\\\\\\"PreShutdownHooksStopped\\\\\\\"\\\\nI0813 20:01:10.683741 1 genericapiserver.go:536] \\\\\\\"[graceful-termination] shutdown event\\\\\\\" 
name=\\\\\\\"ShutdownInitiated\\\\\\\"\\\\nI0813 20:01:10.684120 1 object_count_tracker.go:151] \\\\\\\"StorageObjectCountTracker pruner is exiting\\\\\\\"\\\\nI0813 20:01:10.684129 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ...\\\\nI0813 20:01:10.684313 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ...\\\\nI0813 20:01:10.684385 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...\\\\nI0813 20:01:10.684408 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ...\\\\nI0813 20:01:10.684468 1 base_controller.go:172] Shutting down ConsoleServiceController ...\\\\nI0813 20:01:10.684509 1 base_controller.go:172] Shutting down ConsoleServiceController ...\\\\nI0813 20:01:10.684517 1 base_controller.go:172] Shutting down InformerWithSwitchController ...\\\\nW0813 20:01:10.684548 1 builder.go:131] graceful termination failed, controllers failed with error: stopped\\\\nI0813 20:01:10.684633 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:22Z\\\"}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.578573 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:44Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6341e153ef0c67912e90a0b7692c83762f46b850aa6ae1295493ecd5d38961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6485be1e489f3639dea8a99d8c28a92ae0a26771b10eb70e93f2e898701f49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://e6485be1e489f3639dea8a99d8c28a92ae0a26771b10eb70e93f2e898701f49e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:42Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T00:06:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.590648 3561 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.646235 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T20:00:06Z\\\"}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.647655 3561 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663521 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663575 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663600 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663673 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663714 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663714 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.663768 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.663998 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664106 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664251 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664379 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664519 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664666 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664808 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.664989 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.665053 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:00 crc kubenswrapper[3561]: E1203 00:07:00.665163 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.677938 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.719578 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-10-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79050916-d488-4806-b556-1b0078b31e53\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-10-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.764262 3561 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"663515de-9ac9-4c55-8755-a591a2de3714\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ae381cd61e4203c3136039f3e986bbafe0ddf088d1bfbfc9fa95998167176b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":7,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3170adb4d964bb1b0d4fcefac2050bb117aeab3fbaf35e07671fe5c034d5cf00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7f71563717f644cd0a18392cb754866e3a2feeed434f09fb5b2546616cbfb3ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ad4760bdc75e62c8568812bcaa24c26f3aef7bfab41f02f3d71e575097b33e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T00:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.807819 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.839589 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:57:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.878647 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.919015 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad171c4b-8408-4370-8e86-502999788ddb\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251950-x8jjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:00 crc kubenswrapper[3561]: I1203 00:07:00.958072 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:01:35Z\\\",\\\"message\\\":\\\"73-4e9d-b5ff-47904d2b347f\\\\\\\", APIVersion:\\\\\\\"apps/v1\\\\\\\", ResourceVersion:\\\\\\\"\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager:\\\\ncause by changes in data.openshift-route-controller-manager.client-ca.configmap\\\\nI0813 20:01:32.709976 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.crt\\\\\\\" has been modified (old=\\\\\\\"f4b72f648a02bf4d745720b461c43dc88e5b533156c427b7905f426178ca53a1\\\\\\\", 
new=\\\\\\\"d241a06236d5f1f5f86885717c7d346103e02b5d1ed9dcf4c19f7f338250fbcb\\\\\\\")\\\\nW0813 20:01:32.710474 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\\\\nI0813 20:01:32.710576 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.key\\\\\\\" has been modified (old=\\\\\\\"9fa7e5fbef9e286ed42003219ce81736b0a30e8ce2f7dd520c0c149b834fa6a0\\\\\\\", new=\\\\\\\"db6902c5c5fee4f9a52663b228002d42646911159d139a2d4d9110064da348fd\\\\\\\")\\\\nI0813 20:01:32.710987 1 genericapiserver.go:679] \\\\\\\"[graceful-termination] pre-shutdown hooks completed\\\\\\\" name=\\\\\\\"PreShutdownHooksStopped\\\\\\\"\\\\nI0813 20:01:32.711074 1 genericapiserver.go:536] \\\\\\\"[graceful-termination] shutdown event\\\\\\\" name=\\\\\\\"ShutdownInitiated\\\\\\\"\\\\nI0813 20:01:32.711163 1 object_count_tracker.go:151] \\\\\\\"StorageObjectCountTracker pruner is exiting\\\\\\\"\\\\nI0813 20:01:32.711622 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ...\\\\nI0813 20:01:32.711623 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ...\\\\nI0813 20:01:32.711872 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator\\\\nI0813 20:01:32.711949 1 base_controller.go:172] Shutting down ResourceSyncController ...\\\\nI0813 20:01:32.711995 1 base_controller.go:172] Shutting down ConfigObserver ...\\\\nI0813 20:01:32.712115 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\\\\nW0813 20:01:32.712173 1 builder.go:131] graceful termination failed, controllers failed with error: 
stopped\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:04Z\\\"}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.032744 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.045149 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0453d24-e872-43af-9e7a-86227c26d200\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-9-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.050123 3561 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.050215 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.076867 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.121945 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.156908 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251920-wcws2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: 
E1203 00:07:01.184525 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.200456 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.215213 3561 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.237691 3561 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.279529 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:09Z\\\",\\\"message\\\":\\\"ck openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nE0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nI0813 20:04:38.936003 1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition\\\\nE0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nF0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost\\\\nI0813 20:05:09.028498 1 internal.go:516] \\\\\\\"Stopping and waiting for non leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028591 1 internal.go:520] 
\\\\\\\"Stopping and waiting for leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028608 1 internal.go:526] \\\\\\\"Stopping and waiting for caches\\\\\\\"\\\\nI0813 20:05:09.028585 1 recorder.go:104] \\\\\\\"crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading\\\\\\\" logger=\\\\\\\"events\\\\\\\" type=\\\\\\\"Normal\\\\\\\" object={\\\\\\\"kind\\\\\\\":\\\\\\\"Lease\\\\\\\",\\\\\\\"namespace\\\\\\\":\\\\\\\"openshift-cluster-machine-approver\\\\\\\",\\\\\\\"name\\\\\\\":\\\\\\\"cluster-machine-approver-leader\\\\\\\",\\\\\\\"uid\\\\\\\":\\\\\\\"396b5b52-acf2-4d11-8e98-69ecff2f52d0\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"coordination.k8s.io/v1\\\\\\\",\\\\\\\"resourceVersion\\\\\\\":\\\\\\\"30699\\\\\\\"} reason=\\\\\\\"LeaderElection\\\\\\\"\\\\nI0813 20:05:09.028819 1 internal.go:530] \\\\\\\"Stopping and waiting for webhooks\\\\\\\"\\\\nI0813 20:05:09.028849 1 internal.go:533] \\\\\\\"Stopping and waiting for HTTP servers\\\\\\\"\\\\nI0813 20:05:09.028884 1 internal.go:537] \\\\\\\"Wait completed, proceeding to shutdown the manager\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.315317 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f155735-a9be-4621-a5f2-5ab4b6957acd\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-10-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.316830 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.316887 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.316956 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.316999 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317040 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317085 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317130 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317170 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317228 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317267 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317335 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317425 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317464 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317531 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317593 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317674 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.317746 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317757 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: 
\"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.317791 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.817775831 +0000 UTC m=+20.598210089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.317834 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.317915 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318015 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318079 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.81805625 +0000 UTC m=+20.598490538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318085 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318165 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.818144723 +0000 UTC m=+20.598579011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318035 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318298 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318356 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:01.818346899 +0000 UTC m=+20.598781157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318396 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318416 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318437 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318458 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod 
\"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318480 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318498 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318517 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318564 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318582 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318601 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318630 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318657 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318675 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318694 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318713 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318749 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318766 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:01 crc 
kubenswrapper[3561]: I1203 00:07:01.318786 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318804 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318825 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318845 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318864 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: 
\"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318882 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318902 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318924 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318946 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318947 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" 
not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.318965 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.318983 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.818961838 +0000 UTC m=+20.599396106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319034 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.819010989 +0000 UTC m=+20.599445257 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319048 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319194 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319049 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319309 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.319056 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319345 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319371 3561 
configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319321 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319409 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319430 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319434 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319592 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.319814 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319821 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: 
object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319881 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.819077681 +0000 UTC m=+20.599512039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319922 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.819903216 +0000 UTC m=+20.600337484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.319950 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.819936667 +0000 UTC m=+20.600371045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.319987 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320135 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320150 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820127102 +0000 UTC m=+20.600561370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320174 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820165174 +0000 UTC m=+20.600599452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320193 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820183814 +0000 UTC m=+20.600618092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320208 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:01.820201005 +0000 UTC m=+20.600635273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320225 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820215695 +0000 UTC m=+20.600649973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320242 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820234376 +0000 UTC m=+20.600668644 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320257 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820249616 +0000 UTC m=+20.600683894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320272 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820264547 +0000 UTC m=+20.600698815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320287 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:01.820279227 +0000 UTC m=+20.600713495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320318 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320350 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320385 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320414 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: 
\"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320442 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320473 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320471 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320499 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320519 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:01.820506034 +0000 UTC m=+20.600940312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320577 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320580 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320633 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320702 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320717 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320704 3561 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.82068855 +0000 UTC m=+20.601122938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320764 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320785 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820772722 +0000 UTC m=+20.601206990 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.320801 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.820793373 +0000 UTC m=+20.601227641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320802 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320886 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320927 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: 
\"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320954 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.320981 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.321006 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.321030 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.321056 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.321071 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.321081 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.321108 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.821097292 +0000 UTC m=+20.601531560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.321937 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.322003 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.8219868 +0000 UTC m=+20.602421078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.322077 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.322133 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.822108123 +0000 UTC m=+20.602542381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.322327 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.322613 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.321106 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327100 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc 
kubenswrapper[3561]: I1203 00:07:01.327149 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327164 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327189 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327231 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.827215128 +0000 UTC m=+20.607649386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327255 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.827240439 +0000 UTC m=+20.607674707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327193 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327278 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327319 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327354 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327370 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.827345572 +0000 UTC m=+20.607779840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327411 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327459 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327498 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327554 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327586 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327627 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327643 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327662 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327697 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.827674912 +0000 UTC m=+20.608109180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327726 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327740 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327769 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327781 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.827768695 +0000 UTC m=+20.608202963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327805 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.327834 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327844 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327879 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327953 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328009 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328027 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328100 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.828073683 +0000 UTC m=+20.608507981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.327959 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328173 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328222 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328275 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328325 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328371 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328390 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.828372672 +0000 UTC m=+20.608806950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328430 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328474 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328514 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328566 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328602 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328638 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328672 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328711 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328719 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328742 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328850 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328958 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.328963 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.828769845 +0000 UTC m=+20.609204113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.328943 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329005 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.828993581 +0000 UTC m=+20.609427839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329036 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329142 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329269 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329312 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329328 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.829315691 +0000 UTC m=+20.609749949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329368 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329381 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.829355042 +0000 UTC m=+20.609789340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329420 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.829407204 +0000 UTC m=+20.609841462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329044 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329497 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329457 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329275 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329579 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.829565559 +0000 UTC m=+20.609999817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329615 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.82960048 +0000 UTC m=+20.610034748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329709 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329747 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329749 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.329762 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.329886 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.330292 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.830275741 +0000 UTC m=+20.610710019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330318 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330454 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330778 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330829 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330900 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.330973 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331044 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331159 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331198 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331228 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331274 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331267 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.831227409 +0000 UTC m=+20.611661667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331354 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331388 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331439 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331450 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331474 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331500 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.831489837 +0000 UTC m=+20.611924095 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331555 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331567 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331592 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331620 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.831608321 +0000 UTC m=+20.612042579 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331661 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331720 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331810 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331856 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.331908 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.831894279 +0000 UTC m=+20.612328537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.331950 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.332021 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.832004013 +0000 UTC m=+20.612438281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.332077 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.332109 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:01.832099406 +0000 UTC m=+20.612533674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332379 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332415 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332449 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332484 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: 
\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332524 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332582 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332656 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332697 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332729 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332767 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332812 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332848 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332885 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.332983 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333015 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333050 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833041255 +0000 UTC m=+20.613475513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333067 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.333096 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333115 3561 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833093606 +0000 UTC m=+20.613527874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333226 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.333231 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333270 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833260781 +0000 UTC m=+20.613695039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333349 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333402 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833391935 +0000 UTC m=+20.613826193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333471 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333506 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833493317 +0000 UTC m=+20.613927575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333905 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.333981 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.833963232 +0000 UTC m=+20.614397510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334191 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334253 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: 
\"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.334254 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334310 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.334351 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.834323653 +0000 UTC m=+20.614757921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334385 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334431 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.334518 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.334628 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.334670 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339003 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.834646253 +0000 UTC m=+20.615080521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339056 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.839044516 +0000 UTC m=+20.619478784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339340 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339398 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339424 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339453 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc 
kubenswrapper[3561]: I1203 00:07:01.339475 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339473 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339496 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339556 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.83952737 +0000 UTC m=+20.619961738 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339587 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339621 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339623 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339644 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339669 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339692 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339715 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339737 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339763 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.839747517 +0000 UTC m=+20.620181775 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339829 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339864 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339887 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339909 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.339926 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.339932 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340124 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340168 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340209 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.840195131 +0000 UTC m=+20.620629389 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340280 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340308 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.840301374 +0000 UTC m=+20.620735632 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340337 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.840332125 +0000 UTC m=+20.620766373 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340361 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340374 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.340379 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.840374196 +0000 UTC m=+20.620808444 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340385 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340611 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340675 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.340856 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.341121 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.341465 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.341480 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342322 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342354 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.342519 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.342578 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.842567713 +0000 UTC m=+20.623001961 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342867 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342916 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342941 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.342967 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.342994 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343007 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343029 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343044 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.843030267 +0000 UTC m=+20.623464535 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343075 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343113 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343144 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343175 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343205 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343235 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343262 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343277 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343292 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343324 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343354 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343402 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.843386908 +0000 UTC m=+20.623821166 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343438 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343466 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343487 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343632 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343666 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.843656176 +0000 UTC m=+20.624090434 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.343769 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343812 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343833 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.843826731 +0000 UTC m=+20.624260989 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343886 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.343911 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.843905514 +0000 UTC m=+20.624339782 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344484 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344561 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344599 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344634 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344664 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344697 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344741 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344775 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.344804 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.344843 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.844831781 +0000 UTC m=+20.625266029 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.344906 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.344930 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.844924204 +0000 UTC m=+20.625358462 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344954 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.344970 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345113 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345156 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345505 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345679 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.345683 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345726 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.345731 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.845719658 +0000 UTC m=+20.626153926 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345756 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.345761 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.845752169 +0000 UTC m=+20.626186437 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345792 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345832 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345894 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.345931 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID:
\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.345986 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346021 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846013787 +0000 UTC m=+20.626448045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346050 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346069 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846063629 +0000 UTC m=+20.626497887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346095 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346095 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346116 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.8461103 +0000 UTC m=+20.626544558 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346153 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346195 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346220 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846150312 +0000 UTC m=+20.626584590 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346252 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346286 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346325 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846314817 +0000 UTC m=+20.626749075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346331 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346381 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346418 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346411 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346444 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object 
"openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346455 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846445211 +0000 UTC m=+20.626879479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346482 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.346484 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.846473592 +0000 UTC m=+20.626907930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346519 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346566 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346608 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346734 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 
00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346760 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346762 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346804 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346827 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346851 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346872 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346894 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346962 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347016 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347070 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.847054669 +0000 UTC m=+20.627488947 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347079 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347106 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.847098571 +0000 UTC m=+20.627532829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347113 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.347147 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.847137092 +0000 UTC m=+20.627571370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.346987 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347589 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347615 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347641 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347663 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347687 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347711 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347733 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347756 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" 
(UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347778 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347800 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347821 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347842 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347864 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: 
\"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347887 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347938 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347957 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347977 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.347998 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348021 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348043 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348066 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348106 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348146 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348175 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348196 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348216 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348241 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348262 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348279 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348327 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348382 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348468 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object 
"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348511 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348625 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348674 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349205 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.349204 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349363 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349407 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.348284 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349460 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349498 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.349729 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349731 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.348328 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.848315807 +0000 UTC m=+20.628750075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349807 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849794522 +0000 UTC m=+20.630228780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349832 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849820913 +0000 UTC m=+20.630255171 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349841 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349888 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849876055 +0000 UTC m=+20.630310443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349902 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849896295 +0000 UTC m=+20.630330543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349920 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849911946 +0000 UTC m=+20.630346204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349942 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849928526 +0000 UTC m=+20.630362784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.349947 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.349962 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349971 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849964378 +0000 UTC m=+20.630398636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.349989 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849983408 +0000 UTC m=+20.630417666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.350003 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.849996979 +0000 UTC m=+20.630431237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350026 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350065 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350492 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350641 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350685 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350716 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350734 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350740 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350833 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: 
\"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350878 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.350890 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.350925 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.850916516 +0000 UTC m=+20.631350774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.350956 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.350976 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.850968877 +0000 UTC m=+20.631403135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.350984 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351020 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.850985838 +0000 UTC m=+20.631420096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351040 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.851031039 +0000 UTC m=+20.631465297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.351051 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351057 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.85104948 +0000 UTC m=+20.631483738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351100 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.851089711 +0000 UTC m=+20.631523969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.351126 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351232 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.351256 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.851249796 +0000 UTC m=+20.631684054 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.351279 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.353735 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.355925 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.357884 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:01 crc 
kubenswrapper[3561]: I1203 00:07:01.358772 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.362282 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.362310 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.362321 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.362368 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.862356163 +0000 UTC m=+20.642790421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.400848 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.400881 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.400892 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.400950 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.900932062 +0000 UTC m=+20.681366320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.409196 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.413045 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.413090 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.413105 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.413175 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.913153492 +0000 UTC m=+20.693587750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.414871 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.428219 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.432395 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.444950 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452346 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452441 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452440 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452625 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452673 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452725 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452768 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452786 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452807 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452852 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452878 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452900 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452928 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452951 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.452986 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453026 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453127 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453152 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453172 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453236 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453280 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453301 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453309 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453344 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453349 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453363 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453372 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453421 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453493 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453513 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453519 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453558 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453562 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453599 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453703 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453737 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453759 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453780 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453800 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453836 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453836 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453875 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453888 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453877 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453902 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.453915 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454051 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454127 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454191 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454230 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454314 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454381 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454394 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454425 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454436 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454490 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454509 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454528 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454578 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454612 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454738 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454768 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454791 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454821 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454855 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454923 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454989 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454998 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.454998 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455036 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455033 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455061 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455082 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455138 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455185 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455208 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455234 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455254 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455276 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455304 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455356 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455370 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455444 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455450 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455506 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455564 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455576 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455592 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455621 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455643 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455750 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455759 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455789 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455805 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.455908 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.459799 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.459820 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.459831 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.459868 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.959858408 +0000 UTC m=+20.740292666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.478740 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.478764 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.478775 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.478815 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:01.978803213 +0000 UTC m=+20.759237481 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.504522 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.504595 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.504617 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.504690 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.004669566 +0000 UTC m=+20.785103864 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.507099 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.528424 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:01 crc kubenswrapper[3561]: W1203 00:07:01.537733 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e19f9e8_9a37_4ca8_9790_c219750ab482.slice/crio-971bf6fca4ccfe7795ea978c89b627f475353cdad241f283e929bd4958a5aaf7 WatchSource:0}: Error finding container 971bf6fca4ccfe7795ea978c89b627f475353cdad241f283e929bd4958a5aaf7: Status 404 returned error can't find the container with id 971bf6fca4ccfe7795ea978c89b627f475353cdad241f283e929bd4958a5aaf7 Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.545166 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.552316 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.563842 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.563872 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.563883 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.563933 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.063915942 +0000 UTC m=+20.844350200 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.586877 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.586918 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.586930 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.586982 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.086966091 +0000 UTC m=+20.867400349 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.601240 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.601309 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.601325 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.601414 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.101391918 +0000 UTC m=+20.881826186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.621921 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.621960 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.621975 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.622055 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.122034094 +0000 UTC m=+20.902468352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.644920 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663173 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663207 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663226 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663278 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.163264914 +0000 UTC m=+20.943699172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663419 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663459 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663499 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663558 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663583 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663600 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663659 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663751 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663795 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663844 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.663918 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.663989 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664041 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.664150 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664205 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664209 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664237 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664247 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664323 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.664379 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664503 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664529 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664337 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664578 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.664654 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.664602 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.667967 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668094 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668215 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668312 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668366 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668374 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668603 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.668740 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.668602 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668647 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668929 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668989 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.668949 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.669073 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.668824 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.669158 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.669252 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.668877 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.669503 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.669638 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.669840 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.670098 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.670175 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.670363 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.670519 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.670667 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.670868 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671060 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.671146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671262 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671431 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671572 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.671609 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671764 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.671912 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.672083 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.672146 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.672777 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.672946 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.673494 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.673528 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.673712 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674026 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674285 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674368 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674691 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.674734 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.683038 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.683066 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.683122 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.183104146 +0000 UTC m=+20.963538404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.708121 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.726639 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.742800 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.742835 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.742846 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, 
object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.742904 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.242886267 +0000 UTC m=+21.023320525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.770172 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.778068 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.778315 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.778407 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.778470 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.278449846 +0000 UTC m=+21.058884104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.789942 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"9c9fcbd53fcd64d0477b674cf927423e3ef9f5e337b1494435c4f474ce85b743"} Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.789988 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"49760ad15a1661b6149646e4d1ab46ab7e9f9fc670448f5680a58aadbb5ee036"} Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.798866 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.800945 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q88th" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.801150 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.801192 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"971bf6fca4ccfe7795ea978c89b627f475353cdad241f283e929bd4958a5aaf7"} Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.804734 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.809393 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.809424 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.809437 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.809501 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.309481135 +0000 UTC m=+21.089915393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.822230 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.822262 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.822275 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.822327 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.322311105 +0000 UTC m=+21.102745373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.839510 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.839556 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.839568 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.839626 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.339608989 +0000 UTC m=+21.120043247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: W1203 00:07:01.845291 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbadf0a_ba02_47d6_96a9_0995c1e8e4a8.slice/crio-f58f86061c87170f55e631a5e07926adf07f6e8e03f6edaa5d6a388c496a37e5 WatchSource:0}: Error finding container f58f86061c87170f55e631a5e07926adf07f6e8e03f6edaa5d6a388c496a37e5: Status 404 returned error can't find the container with id f58f86061c87170f55e631a5e07926adf07f6e8e03f6edaa5d6a388c496a37e5 Dec 03 00:07:01 crc kubenswrapper[3561]: W1203 00:07:01.845975 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51a02bbf_2d40_4f84_868a_d399ea18a846.slice/crio-68673c5b9544fdcdf104ce267533e4bd67522b08abb5ff0f8063f8513b36a152 WatchSource:0}: Error finding container 68673c5b9544fdcdf104ce267533e4bd67522b08abb5ff0f8063f8513b36a152: Status 404 returned error can't find the container with id 68673c5b9544fdcdf104ce267533e4bd67522b08abb5ff0f8063f8513b36a152 Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.863403 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.863439 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.863454 
3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.863531 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.363510133 +0000 UTC m=+21.143944391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.864882 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.864920 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 
00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.864954 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.864982 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865005 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865037 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865058 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865244 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object 
"openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865062 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865166 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865185 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865192 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865337 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865389 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865302 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.865283498 +0000 UTC m=+21.645717826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865485 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865457282 +0000 UTC m=+21.645891580 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865669 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865649158 +0000 UTC m=+21.646083426 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865680 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865703 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865690669 +0000 UTC m=+21.646124947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865723 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86571396 +0000 UTC m=+21.646148228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865745 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865737441 +0000 UTC m=+21.646171709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865784 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865775332 +0000 UTC m=+21.646209600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865802 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865851 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865837244 +0000 UTC m=+21.646271532 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865855 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865940 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.865954 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.865978 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.865968898 +0000 UTC m=+21.646403166 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866010 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866035 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866049 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866079 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866065431 +0000 UTC m=+21.646499719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866101 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866151 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866173 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866201 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866215 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866230 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866201105 +0000 UTC m=+21.646635363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866253 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866239876 +0000 UTC m=+21.646674154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866267 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866383 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866404 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866388621 +0000 UTC m=+21.646822909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866431 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866420042 +0000 UTC m=+21.646854330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866470 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866479 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866501 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866492814 +0000 UTC m=+21.646927082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866435 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866531 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866569 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866552636 +0000 UTC m=+21.646986894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866585 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866591 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866628 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.866593 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.866646 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.866637758 +0000 UTC m=+21.647072016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867227 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867216626 +0000 UTC m=+21.647650884 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867247 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867240687 +0000 UTC m=+21.647674945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867291 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867349 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867392 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867435 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867442 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867430123 +0000 UTC m=+21.647864381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867460 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867454163 +0000 UTC m=+21.647888421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867397 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867476 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867493 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867561 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867585 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867575997 +0000 UTC m=+21.648010345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867566 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867607 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867599768 +0000 UTC m=+21.648034146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867645 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867629429 +0000 UTC m=+21.648063717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867698 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867741 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867762 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867797 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867812 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.867799264 +0000 UTC m=+21.648233532 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867855 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867954 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867971 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.867988 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.867961 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868011 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86799683 +0000 UTC m=+21.648431128 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868037 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868024981 +0000 UTC m=+21.648459269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868041 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868064 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868078 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868068482 +0000 UTC m=+21.648502840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868117 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868085573 +0000 UTC m=+21.648519841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868158 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868160 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868147084 +0000 UTC m=+21.648581332 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868202 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868242 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868278 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868305 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868332 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868322329 +0000 UTC m=+21.648756587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868347 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868360 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868369 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868383 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86837337 +0000 UTC m=+21.648807618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868401 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868405 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868395951 +0000 UTC m=+21.648830229 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868425 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868417422 +0000 UTC m=+21.648851700 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868452 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868480 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868489 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868528 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868572 
3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868573 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868595 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868578 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868563076 +0000 UTC m=+21.648997444 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868650 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868694 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868723 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868715371 +0000 UTC m=+21.649149629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868753 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868745402 +0000 UTC m=+21.649179660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868770 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868764162 +0000 UTC m=+21.649198420 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868786 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.868780423 +0000 UTC m=+21.649214681 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868828 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868892 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868917 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868933 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868948 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.868937638 +0000 UTC m=+21.649371916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.868971 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.868995 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869016 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869027 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86901968 +0000 UTC m=+21.649453928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869048 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869051 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869040391 +0000 UTC m=+21.649474739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869055 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869087 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.869078492 +0000 UTC m=+21.649512760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869113 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869120 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869145 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869153 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869144954 +0000 UTC m=+21.649579302 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869186 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869225 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869257 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869267 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869256128 +0000 UTC m=+21.649690476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869267 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869304 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869308 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869298489 +0000 UTC m=+21.649732747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869357 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869391 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869398 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869384711 +0000 UTC m=+21.649819039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869400 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869431 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869435 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869442 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869432543 +0000 UTC m=+21.649866881 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869473 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869490 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869509 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869500075 +0000 UTC m=+21.649934333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869533 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869588 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869553 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869562 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869552257 +0000 UTC m=+21.649986515 (durationBeforeRetry 1s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869641 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869632489 +0000 UTC m=+21.650066757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869659 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86965176 +0000 UTC m=+21.650086028 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869674 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.86966738 +0000 UTC m=+21.650101658 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869698 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869765 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869796 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869811 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869820 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869843 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869820755 +0000 UTC m=+21.650255023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869855 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869863 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869852986 +0000 UTC m=+21.650287324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869928 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.869964 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.869954499 +0000 UTC m=+21.650388757 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.869994 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870027 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870059 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870075 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870094 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.870083983 +0000 UTC m=+21.650518241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870111 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870103324 +0000 UTC m=+21.650537692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870125 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870118184 +0000 UTC m=+21.650552442 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870211 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870255 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870276 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:01 crc 
kubenswrapper[3561]: I1203 00:07:01.870305 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870326 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870346 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870367 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870398 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870431 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870455 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870472 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870492 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870482075 +0000 UTC m=+21.650916353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870512 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870504786 +0000 UTC m=+21.650939054 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870513 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870517 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870551 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870557 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 
00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870572 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870557298 +0000 UTC m=+21.650991626 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870495 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870578 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870557 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870598 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870462 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod 
\"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870594 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870586068 +0000 UTC m=+21.651020326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870642 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870654 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870669 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870680 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870671691 +0000 UTC m=+21.651105949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870725 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870733 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870759 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870750113 +0000 UTC m=+21.651184381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870775 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.870767564 +0000 UTC m=+21.651201832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870791 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870783664 +0000 UTC m=+21.651217932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870809 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870800365 +0000 UTC m=+21.651234643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870825 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870816415 +0000 UTC m=+21.651250683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870841 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.870833276 +0000 UTC m=+21.651267554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.870857 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.870849856 +0000 UTC m=+21.651284134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870900 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870935 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.870996 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871025 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: 
\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871056 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871127 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871157 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871199 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871246 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871276 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871320 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871363 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871404 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871433 3561 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871476 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871505 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871565 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871597 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871620 3561 projected.go:269] Couldn't get secret 
openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871666 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871681 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871696 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871687361 +0000 UTC m=+21.652121609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871710 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871704012 +0000 UTC m=+21.652138270 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871728 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871748 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871718972 +0000 UTC m=+21.652153220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871767 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871757873 +0000 UTC m=+21.652192251 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871838 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871873 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871863327 +0000 UTC m=+21.652297675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871934 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.871959 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.871952589 +0000 UTC m=+21.652386847 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872032 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872076 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872064653 +0000 UTC m=+21.652499021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872106 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872113 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872142 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.872134545 +0000 UTC m=+21.652568933 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872157 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872149635 +0000 UTC m=+21.652584013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872178 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872209 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872196577 +0000 UTC m=+21.652630855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872224 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872241 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872245 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872251 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872274 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872263439 +0000 UTC m=+21.652697707 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872295 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872283779 +0000 UTC m=+21.652718157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872346 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872393 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872384102 +0000 UTC m=+21.652818370 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872432 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872456 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872449104 +0000 UTC m=+21.652883382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872488 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872512 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872505216 +0000 UTC m=+21.652939494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872533 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872568 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872586 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872578308 +0000 UTC m=+21.653012566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872599 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872593199 +0000 UTC m=+21.653027457 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.871639 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872619 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872638 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872643 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.87263533 +0000 UTC m=+21.653069608 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872666 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872683 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872707 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872699942 +0000 UTC m=+21.653134210 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872735 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872768 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872800 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:01 crc 
kubenswrapper[3561]: E1203 00:07:01.872901 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872923 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872924 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.872918049 +0000 UTC m=+21.653352297 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872953 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.872968 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.872972 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.87296675 +0000 UTC m=+21.653401008 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873043 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873071 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.873062203 +0000 UTC m=+21.653496471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873110 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873135 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.873128035 +0000 UTC m=+21.653562303 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873176 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873202 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.873194867 +0000 UTC m=+21.653629135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873205 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873233 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873234 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.873227018 +0000 UTC m=+21.653661296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873263 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.873256219 +0000 UTC m=+21.653690467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873263 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873287 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.87328224 +0000 UTC m=+21.653716498 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873296 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.873324 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.873315491 +0000 UTC m=+21.653749759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.882247 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.882289 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.882309 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.882392 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.382369675 +0000 UTC m=+21.162803973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.905396 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.905439 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.905504 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.405486456 +0000 UTC m=+21.185920714 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.936741 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.936774 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.936786 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.936847 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.436830206 +0000 UTC m=+21.217264464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.952703 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.958701 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-11-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a45bfab9-f78b-4d72-b5b7-903e60401124\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-11-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.975146 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.977780 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:01 crc kubenswrapper[3561]: I1203 00:07:01.977818 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.977980 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.977997 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978009 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978062 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.978046165 +0000 UTC m=+21.758480433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978131 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978150 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978160 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978196 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.978183669 +0000 UTC m=+21.758617937 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978262 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978276 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978285 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.978311 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.978303073 +0000 UTC m=+21.758737341 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.982932 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.982969 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.982983 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:01 crc kubenswrapper[3561]: E1203 00:07:01.983059 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.483037296 +0000 UTC m=+21.263471564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.022940 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.032822 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.032872 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.032886 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.032963 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.532938189 +0000 UTC m=+21.313372447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.051279 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.059922 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.059959 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.059974 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.060035 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.56001418 +0000 UTC m=+21.340448448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.080896 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.081014 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.081068 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081224 3561 projected.go:294] Couldn't get configMap 
openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081252 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081265 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081307 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.081294015 +0000 UTC m=+21.861728273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081336 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081376 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081392 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081467 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.081445789 +0000 UTC m=+21.861880127 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081513 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081594 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081611 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.081701 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.081681596 +0000 UTC m=+21.862115874 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.094951 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.103321 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.103345 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.103381 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.103465 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:02.603423995 +0000 UTC m=+21.383858273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.192887 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.193485 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.194797 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l92hr" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.194902 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.196675 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.196911 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.197126 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.197185 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202732 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202758 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202810 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.202797368 +0000 UTC m=+21.983231626 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202904 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202918 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202932 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.202957 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.202949803 +0000 UTC m=+21.983384061 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203011 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203049 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203074 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203122 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.203114558 +0000 UTC m=+21.983548816 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203207 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203219 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203226 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203266 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.203258642 +0000 UTC m=+21.983692900 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203333 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203344 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203352 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.203376 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.203369185 +0000 UTC m=+21.983803443 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209205 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209236 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209249 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209311 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.709291745 +0000 UTC m=+21.489726003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209490 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209509 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209519 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.209588 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.709577094 +0000 UTC m=+21.490011352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.211365 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.211394 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.211405 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.211453 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.71143502 +0000 UTC m=+21.491869278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.216795 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.216830 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.216850 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.216918 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.716899505 +0000 UTC m=+21.497333763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.229184 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:02 crc kubenswrapper[3561]: W1203 00:07:02.241711 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6d14a5_ca00_40c7_af7a_051a98a24eed.slice/crio-f5448bbd399a804f0d337dc80e2934537caad084410619749d505338341e4c6b WatchSource:0}: Error finding container f5448bbd399a804f0d337dc80e2934537caad084410619749d505338341e4c6b: Status 404 returned error can't find the container with id f5448bbd399a804f0d337dc80e2934537caad084410619749d505338341e4c6b
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.242666 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.252015 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.297401 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:02 crc kubenswrapper[3561]: W1203 00:07:02.306382 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8175ef1_0983_4bfe_a64e_fc6f5c5f7d2e.slice/crio-e0c4dc6071f9a69765e2181d8a7550a1708ac3cbcff77718b7f82fa227f5e219 WatchSource:0}: Error finding container e0c4dc6071f9a69765e2181d8a7550a1708ac3cbcff77718b7f82fa227f5e219: Status 404 returned error can't find the container with id e0c4dc6071f9a69765e2181d8a7550a1708ac3cbcff77718b7f82fa227f5e219
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.306871 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.306957 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307111 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307135 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307147 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307187 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307211 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307225 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307273 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.307258395 +0000 UTC m=+22.087692653 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307653 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307674 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.307683 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.308382 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.307329227 +0000 UTC m=+22.087763485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.308421 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.808405179 +0000 UTC m=+21.588839437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.310857 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.325433 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.325460 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.325471 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.325529 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.825511477 +0000 UTC m=+21.605945735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.347342 3561 request.go:697] Waited for 1.001431505s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.348966 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.357208 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.359257 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.359286 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.359297 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.359358 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.859340163 +0000 UTC m=+21.639774421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.399611 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.399909 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.399930 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.399942 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.399998 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.899983215 +0000 UTC m=+21.680417473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414426 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414494 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414506 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414554 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414604 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414638 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414648 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414651 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414681 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414723 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.414701911 +0000 UTC m=+22.195136209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414734 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414760 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414769 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414775 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414783 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414792 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.414816 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414827 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.414809224 +0000 UTC m=+22.195243542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414858 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.414848465 +0000 UTC m=+22.195282783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414858 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414873 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414891 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414906 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414916 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.414954 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.414938688 +0000 UTC m=+22.195373036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.415003 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.415023 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.415034 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.415072 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.415053942 +0000 UTC m=+22.195488210 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.415400 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.415385272 +0000 UTC m=+22.195819540 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.423535 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.457372 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:07:02 crc kubenswrapper[3561]: W1203 00:07:02.460918 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-8135e46c9e125bca8da3f36c0e884781e2cbd1b9d814612a71aa2fc1561e032a WatchSource:0}: Error finding container 8135e46c9e125bca8da3f36c0e884781e2cbd1b9d814612a71aa2fc1561e032a: Status 404 returned error can't find the container with id 8135e46c9e125bca8da3f36c0e884781e2cbd1b9d814612a71aa2fc1561e032a
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.462826 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.473220 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.473264 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.473276 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.473348 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.973321197 +0000 UTC m=+21.753755455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.477614 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.477653 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.477668 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.477751 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.977726242 +0000 UTC m=+21.758160500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.478315 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.478346 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.478358 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.478418 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:02.978399342 +0000 UTC m=+21.758833590 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.500203 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.500232 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.500246 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.500312 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.000294646 +0000 UTC m=+21.780728904 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: W1203 00:07:02.504310 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d0dcce3_d96e_48cb_9b9f_362105911589.slice/crio-42d0675b7dd1526892d29164762ea49f1824366bb0b408770830a509fea8a3a1 WatchSource:0}: Error finding container 42d0675b7dd1526892d29164762ea49f1824366bb0b408770830a509fea8a3a1: Status 404 returned error can't find the container with id 42d0675b7dd1526892d29164762ea49f1824366bb0b408770830a509fea8a3a1 Dec 03 00:07:02 crc kubenswrapper[3561]: W1203 00:07:02.506159 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1bae8b_3200_4ad9_b33b_cf8701f3027c.slice/crio-ae98f05191e50487be2835ee2e18839f361c7fa4429a532bca2c7c8401db4dcd WatchSource:0}: Error finding container ae98f05191e50487be2835ee2e18839f361c7fa4429a532bca2c7c8401db4dcd: Status 404 returned error can't find the container with id ae98f05191e50487be2835ee2e18839f361c7fa4429a532bca2c7c8401db4dcd Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.516550 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.516726 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516746 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516771 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516782 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516910 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.516891499 +0000 UTC m=+22.297325757 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516942 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516968 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.516982 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.517034 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.517014362 +0000 UTC m=+22.297448620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.525840 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.541821 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.584081 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.584122 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.584396 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:03.084374114 +0000 UTC m=+21.864808392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.617012 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.617043 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.617054 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.617109 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.117091156 +0000 UTC m=+21.897525414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.621160 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.621229 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.621259 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621570 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621589 3561 projected.go:294] 
Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621599 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621706 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.621691905 +0000 UTC m=+22.402126163 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621760 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621770 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621777 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object 
"openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621798 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.621791868 +0000 UTC m=+22.402226126 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621833 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621842 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621848 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.621867 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.62186106 +0000 UTC m=+22.402295318 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.625080 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.649283 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.649326 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.649339 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not 
registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.649432 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.149410435 +0000 UTC m=+21.929844763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.664276 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.664471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.664727 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.664834 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.664879 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.664967 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665002 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665071 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665105 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665172 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665204 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665274 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665382 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665453 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665491 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665602 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665648 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665724 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.665760 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.665833 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.699646 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.713697 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.718399 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:03:10Z\\\",\\\"message\\\":\\\" openshift-network-node-identity/ovnkube-identity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity\\\\\\\": dial tcp 192.168.130.11:6443: connect: connection refused\\\\nI0813 20:03:00.839743 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get \\\\\\\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true\\\\u0026resourceVersion=30560\\\\u0026timeoutSeconds=591\\\\u0026watch=true\\\\\\\": dial tcp 192.168.130.11:6443: connect: connection refused - backing off\\\\nI0813 20:03:10.047083 1 leaderelection.go:285] failed to renew lease openshift-network-node-identity/ovnkube-identity: timed out waiting for the condition\\\\nE0813 20:03:10.050206 1 leaderelection.go:308] Failed to release lock: Put \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity\\\\\\\": dial tcp 192.168.130.11:6443: connect: connection refused\\\\nI0813 20:03:10.050704 1 recorder.go:104] \\\\\\\"crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 stopped leading\\\\\\\" logger=\\\\\\\"events\\\\\\\" type=\\\\\\\"Normal\\\\\\\" 
object={\\\\\\\"kind\\\\\\\":\\\\\\\"Lease\\\\\\\",\\\\\\\"namespace\\\\\\\":\\\\\\\"openshift-network-node-identity\\\\\\\",\\\\\\\"name\\\\\\\":\\\\\\\"ovnkube-identity\\\\\\\",\\\\\\\"uid\\\\\\\":\\\\\\\"affbead6-e1b0-4053-844d-1baff2e26ac5\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"coordination.k8s.io/v1\\\\\\\",\\\\\\\"resourceVersion\\\\\\\":\\\\\\\"30647\\\\\\\"} reason=\\\\\\\"LeaderElection\\\\\\\"\\\\nI0813 20:03:10.051306 1 internal.go:516] \\\\\\\"Stopping and waiting for non leader election runnables\\\\\\\"\\\\nI0813 20:03:10.051417 1 internal.go:520] \\\\\\\"Stopping and waiting for leader election runnables\\\\\\\"\\\\nI0813 20:03:10.051459 1 internal.go:526] \\\\\\\"Stopping and waiting for caches\\\\\\\"\\\\nI0813 20:03:10.051469 1 internal.go:530] \\\\\\\"Stopping and waiting for webhooks\\\\\\\"\\\\nI0813 20:03:10.051476 1 internal.go:533] \\\\\\\"Stopping and waiting for HTTP servers\\\\\\\"\\\\nI0813 20:03:10.051484 1 internal.go:537] \\\\\\\"Wait completed, proceeding to shutdown the manager\\\\\\\"\\\\nerror running approver: leader election lost\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.727817 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.727861 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.727925 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.727955 3561 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728255 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728269 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728278 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728324 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.728310577 +0000 UTC m=+22.508744835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728369 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728378 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728384 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728403 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.728397489 +0000 UTC m=+22.508831747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728451 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728461 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728467 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728489 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.728483662 +0000 UTC m=+22.508917920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728526 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728549 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728559 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.728578 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.728573075 +0000 UTC m=+22.509007333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.731308 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.783707 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-644bb77b49-5x5xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-644bb77b49-5x5xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.796769 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-12-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3557248c-8f70-4165-aa66-8df983e7e01a\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver\"/\"installer-12-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.816530 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" 
event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"ae98f05191e50487be2835ee2e18839f361c7fa4429a532bca2c7c8401db4dcd"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.817355 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"42d0675b7dd1526892d29164762ea49f1824366bb0b408770830a509fea8a3a1"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.818100 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"f5448bbd399a804f0d337dc80e2934537caad084410619749d505338341e4c6b"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.819442 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"25e251c91f998883cec92448e57ffcbd0f46f7190f3879fe24b99ae2240a1795"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.819502 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"1f545b8ddc782d53d87fe35eec3465c163e0c794d0304042083c5dc432272a3d"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.820200 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"8135e46c9e125bca8da3f36c0e884781e2cbd1b9d814612a71aa2fc1561e032a"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.820844 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"14ca377bac75504671acbdc4a7e3e88249b30e700dae04ede2668327262e6530"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.827193 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"e0c4dc6071f9a69765e2181d8a7550a1708ac3cbcff77718b7f82fa227f5e219"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.841189 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:45Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b760c695e7b8d3b262f04daa7a579fb228f4e1fba51fb41f3c911344215f5864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76c8ec04c94ffc96c251b87a684e50a3368f4910a2a6466207d7c8611931532b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://33e826ced05ba3c8ba4954263716756b01c01d91e16ff0add1ae912a03b99218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}}},{\\\"cont
ainerID\\\":\\\"cri-o://58bf291296d90004ce1675b9fa94da22f32b2b341dd1e9677056090525d91beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:46Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3687282c2f2ca81897c70f48cdba7f5db4e27c5539c8d2b3ca4b0287e477f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3687282c2f2ca81897c70f48cdba7f5db4e27c5539c8d2b3ca4b0287e477f56c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7df873f5567fc3299275c7a27c8a0994e34849d68f9e3871d7dd4ff67182bcc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df873f5567fc3299275c7a27c8a0994
e34849d68f9e3871d7dd4ff67182bcc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d5164271fb92cac86e96e1e9b808f31b2d6e015504ce0df9f212f8ec6ec30f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5164271fb92cac86e96e1e9b808f31b2d6e015504ce0df9f212f8ec6ec30f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:45Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T00:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.842207 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.842260 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: 
\"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.842719 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.842749 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.842762 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.842829 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.842809377 +0000 UTC m=+22.623243635 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.842841 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"2c29474c6d82bd802f5567d9af98523d8bfe2c37b6e1750f6ce9187f3b56b306"} Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.852016 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.852059 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.852075 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.852143 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:03.85212346 +0000 UTC m=+22.632557718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.854131 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"41579a1131390b5dea8ca08527f669b3bac7b08c716f5722999e2f2e1ef44d6f"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.854172 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"68673c5b9544fdcdf104ce267533e4bd67522b08abb5ff0f8063f8513b36a152"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.855965 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"bf14aa28bb7abb214d55492826dd576aabc22d3040d6dd6a0ead346d503bc720"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.855994 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"f58f86061c87170f55e631a5e07926adf07f6e8e03f6edaa5d6a388c496a37e5"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.879940 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"6c987bd11b714317cb0c65f2e3008cf24d04da5833d5a8a92fcc6645d9ba2a8a"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.884822 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:16Z\\\"}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.895246 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" exitCode=0 Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.895355 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.895377 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.899423 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"ce34fdd85e35b653fbab04b9ce9f34e4e936d2393255142d0343ee095dc0a473"} Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.916206 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943186 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943230 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943251 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943272 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943325 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943347 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943366 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943387 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:02 crc 
kubenswrapper[3561]: I1203 00:07:02.943407 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943428 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943448 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943478 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943500 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943567 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943588 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943609 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943628 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943648 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943692 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943714 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943735 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod 
\"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943754 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943778 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943815 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943853 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943875 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943894 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943934 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943954 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.943993 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944014 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944044 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944065 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944092 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944114 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 
00:07:02.944133 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944155 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944177 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944197 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944229 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944232 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944258 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.944280 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944300 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944282923 +0000 UTC m=+23.724717181 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944358 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944391 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944379286 +0000 UTC m=+23.724813544 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944436 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944457 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944449898 +0000 UTC m=+23.724884156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944492 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944509 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94450392 +0000 UTC m=+23.724938178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944550 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944571 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944565642 +0000 UTC m=+23.724999900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944598 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944607 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944675 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944684 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944618 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944611023 +0000 UTC m=+23.725045281 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944704 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944697716 +0000 UTC m=+23.725131974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944742 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944718807 +0000 UTC m=+23.725153065 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944755 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944767 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944776 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944794 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.944789029 +0000 UTC m=+23.725223287 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944822 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944835 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944850 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944856 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944877 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944862 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945518 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944839 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94483359 +0000 UTC m=+23.725267848 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945572 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945552532 +0000 UTC m=+23.725986860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945590 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.945581823 +0000 UTC m=+22.726016081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945594 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944925 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944960 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945682 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945597654 +0000 UTC m=+23.726031912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945690 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945698 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945692477 +0000 UTC m=+23.726126735 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945712 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945705647 +0000 UTC m=+23.726139905 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945732 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945724078 +0000 UTC m=+23.726158326 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.945780 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944642 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945804 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94579576 +0000 UTC m=+23.726230018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945817 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94581141 +0000 UTC m=+23.726245668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945000 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945010 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945857 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945838131 +0000 UTC m=+23.726272379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945877 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945868292 +0000 UTC m=+23.726302550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945058 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945067 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945037 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945134 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945166 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945165 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945210 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945226 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945256 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946090 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945300 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945327 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946149 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945374 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946201 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945399 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945444 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946238 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946280 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945661 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.944978 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945911 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945969 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.945961515 +0000 UTC m=+23.726395773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946405 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946380557 +0000 UTC m=+23.726814815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946417 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946411088 +0000 UTC m=+23.726845346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.945088 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946473 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946466709 +0000 UTC m=+23.726900957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946483 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94647847 +0000 UTC m=+23.726912728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946492 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94648762 +0000 UTC m=+23.726921868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946505 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94649918 +0000 UTC m=+23.726933438 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946516 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946510281 +0000 UTC m=+23.726944539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946526 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946521541 +0000 UTC m=+23.726955789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946549 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946532251 +0000 UTC m=+23.726966509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946566 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946559962 +0000 UTC m=+23.726994220 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946581 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946576663 +0000 UTC m=+23.727010921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946597 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946591373 +0000 UTC m=+23.727025631 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946614 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946608754 +0000 UTC m=+23.727043012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946632 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946626304 +0000 UTC m=+23.727060562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946648 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946642105 +0000 UTC m=+23.727076363 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.946712 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946750 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946743798 +0000 UTC m=+23.727178056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946764 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946757018 +0000 UTC m=+23.727191406 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946778 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946771659 +0000 UTC m=+23.727206047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946793 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946784559 +0000 UTC m=+23.727218947 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946805 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.946799849 +0000 UTC m=+23.727234227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946818 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94681157 +0000 UTC m=+23.727245948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.946829 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94682352 +0000 UTC m=+23.727257898 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.946877 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.946924 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947000 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947002 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947030 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947022236 +0000 UTC m=+23.727456494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947044 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947037737 +0000 UTC m=+23.727471985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947049 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947074 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947136 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947160 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94714299 +0000 UTC m=+23.727577248 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947159 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947079 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947178 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947171111 +0000 UTC m=+23.727605369 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947200 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947193342 +0000 UTC m=+23.727627600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947223 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947206942 +0000 UTC m=+23.727641310 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947265 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947325 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947383 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947388 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947416 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert 
podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947408838 +0000 UTC m=+23.727843086 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947429 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947446 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947454 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947459 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947452059 +0000 UTC m=+23.727886317 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947490 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94748118 +0000 UTC m=+23.727915438 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947491 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947498 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947526 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:04.947520652 +0000 UTC m=+23.727954900 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947526 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947575 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947569373 +0000 UTC m=+23.728003631 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947603 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947641 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947682 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947702 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947717 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947746 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947755 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947741348 +0000 UTC m=+23.728175606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947710 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947780 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947782 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:04.947772719 +0000 UTC m=+23.728206977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947799 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94779316 +0000 UTC m=+23.728227418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947815 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94780626 +0000 UTC m=+23.728240628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947848 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947896 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947937 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947939 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.947967 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert 
podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.947958875 +0000 UTC m=+23.728393133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.947995 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948006 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948021 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948043 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948033677 +0000 UTC m=+23.728468045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948063 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948054548 +0000 UTC m=+23.728488816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948044 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948077 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948084 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948102 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948096739 +0000 UTC m=+23.728530997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948116 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94810994 +0000 UTC m=+23.728544198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948102 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948138 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: 
I1203 00:07:02.948146 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948167 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948159131 +0000 UTC m=+23.728593489 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948186 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948195 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948205 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls 
podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948200402 +0000 UTC m=+23.728634660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948229 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948250 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948262 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948280 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948274755 +0000 UTC m=+23.728709013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948301 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948314 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948323 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948344 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948334627 +0000 UTC m=+23.728768995 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948356 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948372 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948375 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948369788 +0000 UTC m=+23.728804046 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948396 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948413 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948408729 +0000 UTC m=+23.728842987 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948423 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948444 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948451 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.94844327 +0000 UTC m=+23.728877648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948478 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948510 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948526 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948530 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:04.948524652 +0000 UTC m=+23.728958910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948566 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948574 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948595 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948589574 +0000 UTC m=+23.729023832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948598 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948623 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948630 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948645 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948636596 +0000 UTC m=+23.729070854 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948481 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948694 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948699 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948715 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948709438 +0000 UTC m=+23.729143696 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948667 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948752 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948673 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948727 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948720938 +0000 UTC m=+23.729155196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948824 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.9487951 +0000 UTC m=+23.729229358 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948895 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948924 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948927 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948917403 +0000 UTC m=+23.729351651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.948947 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.948941834 +0000 UTC m=+23.729376092 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948950 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.948984 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: 
\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949014 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949021 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949036 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949029407 +0000 UTC m=+23.729463665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949047 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949041687 +0000 UTC m=+23.729475945 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949068 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949080 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949088 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949082959 +0000 UTC m=+23.729517217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949100 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949095309 +0000 UTC m=+23.729529567 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949119 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949161 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949184 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949212 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949257 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949281 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949304 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod 
\"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949350 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949390 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949413 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949435 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949458 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949479 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949500 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949519 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949556 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.949582 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949653 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949674 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949668536 +0000 UTC m=+23.730102794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949705 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949723 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949718038 +0000 UTC m=+23.730152296 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949754 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949772 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949765929 +0000 UTC m=+23.730200187 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949802 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949820 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949814891 +0000 UTC m=+23.730249149 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949865 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949886 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949879393 +0000 UTC m=+23.730313641 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949914 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949949 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.949926934 +0000 UTC m=+23.730361192 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.949979 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950020 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950014757 +0000 UTC m=+23.730449015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950054 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950071 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950065769 +0000 UTC m=+23.730500027 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950099 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950118 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.95011169 +0000 UTC m=+23.730545948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950143 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950165 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950158141 +0000 UTC m=+23.730592399 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950397 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950389509 +0000 UTC m=+23.730823767 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950428 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950446 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.95044082 +0000 UTC m=+23.730875078 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950497 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950508 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950519 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950557 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:03.950535003 +0000 UTC m=+22.730969261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950586 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950623 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950617566 +0000 UTC m=+23.731051824 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950652 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950671 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950665887 +0000 UTC m=+23.731100145 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950696 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950712 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950707698 +0000 UTC m=+23.731141956 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950743 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950762 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.95075703 +0000 UTC m=+23.731191288 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950796 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950817 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950811602 +0000 UTC m=+23.731245860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950846 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: E1203 00:07:02.950867 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.950859313 +0000 UTC m=+23.731293571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.955596 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:02 crc kubenswrapper[3561]: I1203 00:07:02.997953 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.037260 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:34Z\\\",\\\"message\\\":\\\" Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125\\\\nI0813 19:59:36.141079 1 status.go:99] Syncing status: available\\\\nI0813 19:59:36.366889 1 status.go:69] Syncing status: re-syncing\\\\nI0813 19:59:36.405968 1 sync.go:75] Provider is NoOp, skipping synchronisation\\\\nI0813 19:59:36.451686 1 status.go:99] Syncing status: available\\\\nE0813 20:01:53.428030 1 leaderelection.go:369] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io 
\\\\\\\"machine-api-operator\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\\nE0813 20:02:53.432992 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:03:53.443054 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:53.434088 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nI0813 20:05:34.050754 1 leaderelection.go:285] failed to renew lease openshift-machine-api/machine-api-operator: timed out waiting for the condition\\\\nE0813 20:05:34.147127 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io \\\\\\\"machine-api-operator\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\\nF0813 20:05:34.165368 1 start.go:104] Leader election lost\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:12Z\\\"}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051376 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051418 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051465 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051553 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051648 3561 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051676 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.051730 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.051982 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052004 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052014 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 
00:07:03.052051 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.05203898 +0000 UTC m=+23.832473238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052095 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052104 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052110 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.052123993 +0000 UTC m=+22.832558251 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052180 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052194 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052203 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052232 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.052223226 +0000 UTC m=+22.832657484 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052274 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052285 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052295 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052322 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.052313848 +0000 UTC m=+22.832748106 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052375 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052388 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052395 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052416 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.05241024 +0000 UTC m=+22.832844498 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052458 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052467 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052474 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052492 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.052487183 +0000 UTC m=+23.832921441 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052529 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052560 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052569 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.052596 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.052587706 +0000 UTC m=+23.833021964 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.076948 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:07Z\\\"}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.132782 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.153631 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.153668 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.153748 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.153852 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.153850 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.153882 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.153894 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.153921 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.153971 3561 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.153930208 +0000 UTC m=+22.934364466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154054 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154070 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.154070 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154079 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod 
openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154128 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.154104633 +0000 UTC m=+23.934538891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154147 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154163 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154172 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc 
kubenswrapper[3561]: E1203 00:07:03.154239 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154251 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154258 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154286 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.154279249 +0000 UTC m=+22.934713497 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154316 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154328 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154335 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154365 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.154358241 +0000 UTC m=+23.934792499 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154411 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154421 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154451 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.154445044 +0000 UTC m=+23.934879302 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.154464 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:04.154458294 +0000 UTC m=+22.934892552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.256649 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.256724 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:03 crc kubenswrapper[3561]: 
I1203 00:07:03.256763 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256845 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256876 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256889 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256939 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.2569236 +0000 UTC m=+24.037357858 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.256878 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256971 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.256989 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257001 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257052 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:05.257033993 +0000 UTC m=+24.037468291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257085 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257096 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.257101 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257104 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257132 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" 
failed. No retries permitted until 2025-12-03 00:07:05.257125136 +0000 UTC m=+24.037559394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257169 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257178 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257200 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.257193578 +0000 UTC m=+24.037627836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257245 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257257 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257264 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.257284 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.25727846 +0000 UTC m=+24.037712719 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.395533 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.395967 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.396504 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.396871 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.396887 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 
00:07:03.396954 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.396933424 +0000 UTC m=+24.177367732 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.396822 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.396990 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.397001 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.397049 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.397037387 +0000 UTC m=+24.177471705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.498783 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.498829 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.498861 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.498930 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.498955 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.499012 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499063 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499091 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499106 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499158 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.499140232 +0000 UTC m=+24.279574550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499222 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499237 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499246 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499305 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499315 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499321 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499347 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.499334378 +0000 UTC m=+24.279768636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499386 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499395 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499416 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.49940869 +0000 UTC m=+24.279842948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499458 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499474 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499483 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499515 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.499504233 +0000 UTC m=+24.279938561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499577 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499594 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499603 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499635 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.499625467 +0000 UTC m=+24.280059815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.499883 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.499873625 +0000 UTC m=+24.280307973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.601768 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.601936 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.601973 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.601995 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.602037 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.60202336 +0000 UTC m=+24.382457618 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.602139 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.602169 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.602184 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.602251 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.602231997 +0000 UTC m=+24.382666265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.602150 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663701 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663735 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663739 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663701 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663770 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663776 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663818 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663869 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.663880 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663900 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663912 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663921 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663953 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663960 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.663954 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664043 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664055 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664094 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664100 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664125 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664124 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664137 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664133 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664150 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664135 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664168 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664172 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664181 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664191 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664193 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664197 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664215 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664217 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664227 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664233 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664238 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664316 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664375 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664432 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664507 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664582 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.664596 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664653 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664711 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664789 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664851 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664914 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.664973 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665029 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665091 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665149 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665204 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665263 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665351 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665428 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665586 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665658 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665757 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665810 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665858 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665914 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.665962 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666057 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666106 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666156 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666332 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666488 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666646 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666781 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.666894 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.667891 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.668199 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.705087 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.705256 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.705322 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706178 3561 projected.go:294] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706208 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706227 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706291 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.70626939 +0000 UTC m=+24.486703688 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706384 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706405 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706419 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706467 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.706452246 +0000 UTC m=+24.486886544 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706576 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706598 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706612 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.706656 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.706642462 +0000 UTC m=+24.487076760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.810712 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.810789 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.810929 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.810975 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: 
\"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811740 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811776 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811794 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811823 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811857 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.81183711 +0000 UTC m=+24.592271398 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811862 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811882 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811983 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.811995 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.812003 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc 
kubenswrapper[3561]: E1203 00:07:03.812056 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.812037356 +0000 UTC m=+24.592471614 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.812260 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.812251213 +0000 UTC m=+24.592685471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.812607 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.812752 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.812911 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.813053 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.813032647 +0000 UTC m=+24.593466945 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.905960 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"95baddad739c2f1d687b1a8a49dbaa82efbc381889450176d862489784432569"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.913161 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.913252 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.930381 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.930747 3561 projected.go:294] Couldn't get configMap 
openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.930837 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.930499 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.931056 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.931077 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.930991 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.930967062 +0000 UTC m=+24.711401330 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: E1203 00:07:03.931463 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.931441425 +0000 UTC m=+24.711875693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.936688 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"a224155bd9a6d4710dda8f5c00c6e879aec735776c05ca0da089787e0e6821d0"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.941060 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"084ba226ca0b5dd051014bb4e47312eb2167bafad7061d8ac2052506fd398e79"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.941104 3561 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.943665 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"480f0970e9e9dc9b9af0dc4fbf13231ac94f2e6658d265a517c39d1ae9f0323c"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.947035 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"f5e5b9d42d79dca449db612b71228ec42a527a5e9127fa72c11a7ace9f5a2262"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.947070 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"bd0a4dec528dacd32559f3d2049827772a46c70452b3e8351a29d5d590e90e90"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.950483 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"a6664b2bf942257cc5392f17e2e341ac4581ab92ff02ceb4482efe92d0af9629"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.950571 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9b5c29e468af30f053fbf43612a847195bb7c90ce2a6b636a130574976ab9484"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.953513 
3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"7816ef0cdcc4b73e8f2e0c858f479182b1bffe7d6011eb8ba3bcb3edb15d39a3"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.954858 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="bf14aa28bb7abb214d55492826dd576aabc22d3040d6dd6a0ead346d503bc720" exitCode=0 Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.954910 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"bf14aa28bb7abb214d55492826dd576aabc22d3040d6dd6a0ead346d503bc720"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.957660 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"9cb8d9fcb14b71ebb4bf0e69384664629183edba90938cf181199425d6163289"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.962840 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.962862 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.962871 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"} Dec 03 00:07:03 crc kubenswrapper[3561]: I1203 00:07:03.964102 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"aace61caed5cb85a800348d7364adfc185bf96b1b92fd68726021af3d25e5fe3"} Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.032710 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.032846 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035311 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035340 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035352 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035398 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.035382376 +0000 UTC m=+24.815816634 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035453 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035467 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035475 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.035502 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc 
podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.035493699 +0000 UTC m=+24.815927957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.137831 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.137898 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.137944 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.138051 3561 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.139784 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.139804 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.139814 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.139850 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.139837622 +0000 UTC m=+24.920271880 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140328 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140345 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140352 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140376 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.140369169 +0000 UTC m=+24.920803427 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140899 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140916 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140923 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.140963 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.140955617 +0000 UTC m=+24.921389875 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.141003 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.141012 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.141035 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.141055 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.14104816 +0000 UTC m=+24.921482418 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.240578 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.240962 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.241048 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.241742 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.241866 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:06.241849865 +0000 UTC m=+25.022284123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.242303 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.242404 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242514 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242606 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:04 crc 
kubenswrapper[3561]: E1203 00:07:04.242618 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242648 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242703 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.24268647 +0000 UTC m=+25.023120758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242624 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.242783 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:06.242761772 +0000 UTC m=+25.023196040 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.665010 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.665676 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.666880 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667007 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667056 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667114 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667205 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667205 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667238 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667313 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667340 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667347 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667423 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667476 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.667512 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667636 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667788 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.667942 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.732893 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.740581 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:04 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:04 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:04 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.740913 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.968654 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"2f6ade1aa7d719e2796a077f4222ad47d1eebd43555c7c6f09d823e9a5efd7ca"} Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.971568 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 
00:07:04.971649 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.971792 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.971914 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.971877273 +0000 UTC m=+27.752311531 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.971959 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.971989 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972012 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972039 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972054 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972069 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972120 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972125 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972156 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972172 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" 
failed. No retries permitted until 2025-12-03 00:07:08.972158882 +0000 UTC m=+27.752593230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972201 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972215 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972232 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972221004 +0000 UTC m=+27.752655262 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972259 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972268 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972288 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972282586 +0000 UTC m=+27.752716844 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972294 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972316 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972334 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972328727 +0000 UTC m=+27.752762975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972368 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972406 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972488 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:04 crc 
kubenswrapper[3561]: I1203 00:07:04.972527 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972556 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972581 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972573295 +0000 UTC m=+27.753007553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972593 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972588005 +0000 UTC m=+27.753022263 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972637 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972665 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972673 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972663327 +0000 UTC m=+27.753097665 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972692 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:08.972684338 +0000 UTC m=+27.753118696 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972708 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972699638 +0000 UTC m=+27.753134026 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972747 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972778 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972767831 +0000 UTC m=+27.753202199 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972836 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972859 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972868 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972856783 +0000 UTC m=+27.753291151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972884 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972876963 +0000 UTC m=+27.753311331 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972902 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972922 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972916144 +0000 UTC m=+27.753350402 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972923 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972947 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972941995 +0000 UTC m=+27.753376253 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972953 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972972 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.972983 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972975196 +0000 UTC m=+27.753409444 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973001 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.972992597 +0000 UTC m=+27.753426955 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.972640 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973011 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973035 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.973029448 +0000 UTC m=+27.753463706 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973043 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973076 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973107 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973138 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973167 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973197 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973242 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973276 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973313 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973356 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973387 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973419 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973450 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973481 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973608 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973626 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973625 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973660 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973673 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not 
registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973683 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973706 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973708 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.973698608 +0000 UTC m=+27.754132956 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973736 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973763 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973775 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973791 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.973783441 +0000 UTC m=+27.754217799 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973809 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.973831 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.973824122 +0000 UTC m=+27.754258380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973849 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973871 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " 
pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973891 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973929 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973952 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.973991 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.974012 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.974068 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.974090 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974113 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974148 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974138562 +0000 UTC m=+27.754572920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974151 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974185 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974177663 +0000 UTC m=+27.754612011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974221 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974247 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974238765 +0000 UTC m=+27.754673123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974281 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974305 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974297997 +0000 UTC m=+27.754732355 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974341 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974370 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974362329 +0000 UTC m=+27.754796587 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974409 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974438 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974430751 +0000 UTC m=+27.754865229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974476 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974502 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974494283 +0000 UTC m=+27.754928651 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974556 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974586 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974577425 +0000 UTC m=+27.755011793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974626 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974653 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974645257 +0000 UTC m=+27.755079615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974686 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974714 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974707169 +0000 UTC m=+27.755141427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974746 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974774 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974763841 +0000 UTC m=+27.755198209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974811 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974837 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974828903 +0000 UTC m=+27.755263261 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974869 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974891 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974883775 +0000 UTC m=+27.755318133 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974926 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.974950 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.974943306 +0000 UTC m=+27.755377664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.974120 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975012 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975042 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975071 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975107 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975138 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975165 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975206 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975238 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975268 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975315 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975378 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975471 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975493 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975515 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.975562 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976685 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976746 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976796 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.976778982 +0000 UTC m=+27.757213230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976812 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.976805172 +0000 UTC m=+27.757239430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.976824 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.976879 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976920 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.976944 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977232 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977248 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977270 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.976962127 +0000 UTC m=+27.757396425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977334 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.977318738 +0000 UTC m=+27.757753026 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977359 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.977349449 +0000 UTC m=+27.757783697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.977390 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.977614 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.977731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977870 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977903 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977922 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.977998 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978145 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.978135963 +0000 UTC m=+27.758570211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978223 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.978202655 +0000 UTC m=+27.758636953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.977874 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978317 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978435 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.978423882 +0000 UTC m=+27.758858140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.978516 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978583 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978760 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.978748431 +0000 UTC m=+27.759182689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.978888 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.978907 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.979214 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.979243 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.979317 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.979385 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.97937202 +0000 UTC m=+27.759806278 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.979471 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.979496 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.979474493 +0000 UTC m=+27.759908751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.979698 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: I1203 00:07:04.979791 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.980065 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.980468 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.980279 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.980732 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config:
object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.981013 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.981148 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.981593 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.981599 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.981756 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.982225 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.982483 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.982630 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.982904 3561 
configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.982999 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.983236 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.983483 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.983795 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.984123 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.989772 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.990209 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:04 crc kubenswrapper[3561]: E1203 00:07:04.990421 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 
03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.010181 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:04.979833 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:08.979820534 +0000 UTC m=+27.760254792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.012566 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.013401 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013588 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013642 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.013375121 +0000 UTC m=+27.793809389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013698 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.01368158 +0000 UTC m=+27.794115838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013698 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013731 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013708811 +0000 UTC m=+27.794143069 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013733 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013750 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013740542 +0000 UTC m=+27.794174800 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013769 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013759623 +0000 UTC m=+27.794193891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013795 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013777223 +0000 UTC m=+27.794211481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013821 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013805664 +0000 UTC m=+27.794239932 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013851 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.013830715 +0000 UTC m=+27.794264983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013867 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013858346 +0000 UTC m=+27.794292624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013882 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013873796 +0000 UTC m=+27.794308064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013903 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013895507 +0000 UTC m=+27.794329785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013919 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013910807 +0000 UTC m=+27.794345075 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013936 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013928108 +0000 UTC m=+27.794362376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.013957 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.013948929 +0000 UTC m=+27.794383197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014055 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.014027581 +0000 UTC m=+27.794461849 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014071 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014063302 +0000 UTC m=+27.794497580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014092 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014084013 +0000 UTC m=+27.794518281 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014109 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014100233 +0000 UTC m=+27.794534501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014126 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014117564 +0000 UTC m=+27.794551832 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014137 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014153 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014138654 +0000 UTC m=+27.794572922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014170 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014160885 +0000 UTC m=+27.794595153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014190 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014181486 +0000 UTC m=+27.794615764 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014286 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014276749 +0000 UTC m=+27.794711017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014312 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.014293689 +0000 UTC m=+27.794727967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014335 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.01432114 +0000 UTC m=+27.794755408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014366 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014416 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:05 crc 
kubenswrapper[3561]: E1203 00:07:05.014443 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014460 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014479 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014464354 +0000 UTC m=+27.794898612 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014502 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014491735 +0000 UTC m=+27.794925993 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014518 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014510626 +0000 UTC m=+27.794944884 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014563 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014532656 +0000 UTC m=+27.794966924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014596 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014612 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014616 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014641 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014663 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.01463934 +0000 UTC m=+27.795073618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014690 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014706 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014687301 +0000 UTC m=+27.795121559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014758 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014803 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014787874 +0000 UTC m=+27.795222142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014847 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014880 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014871806 +0000 UTC m=+27.795306064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014939 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.014970 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.014974 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.014960989 +0000 UTC m=+27.795395247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015017 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015070 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015076 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015110 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015112 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015111343 +0000 UTC m=+27.795545591 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015177 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015204 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015204 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015196586 +0000 UTC m=+27.795630844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015226 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015218846 +0000 UTC m=+27.795653104 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015200 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015245 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015235917 +0000 UTC m=+27.795670175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015253 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015342 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015276628 +0000 UTC m=+27.795710886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015396 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015439 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015428923 +0000 UTC m=+27.795863191 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015506 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015582 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015630 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015659 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015668 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015708 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015698251 +0000 UTC m=+27.796132509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015744 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015748 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015788 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015805 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015819 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015832 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015824255 +0000 UTC m=+27.796258513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015857 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.015909 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015926 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015951 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.015923518 +0000 UTC m=+27.796357776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.015989 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.016359 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.016431 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.016483 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.017090 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.017129 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017228 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017396 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.017361032 +0000 UTC m=+27.797795290 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017262 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017649 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.01762823 +0000 UTC m=+27.798062498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017733 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.017721482 +0000 UTC m=+27.798155740 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017756 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.017749173 +0000 UTC m=+27.798183431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017767 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.017762404 +0000 UTC m=+27.798196662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017783 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.017778154 +0000 UTC m=+27.798212412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.017980 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.018147 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.018133595 +0000 UTC m=+27.798567853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.018715 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.018703222 +0000 UTC m=+27.799137480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.019581 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.019618 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.01960907 +0000 UTC m=+27.800043328 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.119065 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.119138 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.119232 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119271 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119295 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119305 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119596 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119614 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119621 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119652 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.119637882 +0000 UTC m=+27.900072140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119726 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119737 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119743 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.119764 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.119758256 +0000 UTC m=+27.900192514 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.120049 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.120041924 +0000 UTC m=+27.900476182 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.223606 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.223737 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.223797 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.224876 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.224903 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.224916 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.224970 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.224952824 +0000 UTC m=+28.005387092 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225038 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225052 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225061 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225093 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.225083438 +0000 UTC m=+28.005517696 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225173 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225188 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225197 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.225228 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.225218682 +0000 UTC m=+28.005652940 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.327272 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.327363 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.327507 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327558 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 
00:07:05.327616 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327622 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327633 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327649 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327669 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.327696 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327702 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 
podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.327681358 +0000 UTC m=+28.108115666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327749 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.32772946 +0000 UTC m=+28.108163778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327777 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327792 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327802 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object 
"openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327835 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.327822543 +0000 UTC m=+28.108256801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327845 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327864 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.327874 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.328074 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.32806132 +0000 UTC m=+28.108495578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.328204 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.328217 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.328261 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.328250266 +0000 UTC m=+28.108684524 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.328305 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.430463 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.431729 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431859 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431879 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not 
registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431889 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431926 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.431914268 +0000 UTC m=+28.212348526 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431968 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431978 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.431984 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.432003 3561 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.43199721 +0000 UTC m=+28.212431468 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.532717 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.533713 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.533744 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.533773 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.533844 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.533867 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.533981 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.533996 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534005 3561 projected.go:200] Error preparing data for 
projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534042 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.534030243 +0000 UTC m=+28.314464501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534114 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534125 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534132 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc 
kubenswrapper[3561]: E1203 00:07:05.534153 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.534146116 +0000 UTC m=+28.314580374 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534192 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534202 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534208 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534228 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:09.534222789 +0000 UTC m=+28.314657047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534265 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534274 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534280 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534299 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.534294201 +0000 UTC m=+28.314728459 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534334 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534343 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534349 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534367 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.534361843 +0000 UTC m=+28.314796101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534404 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534411 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.534429 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.534423575 +0000 UTC m=+28.314857833 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.636651 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.636751 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.636992 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637036 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637059 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc 
kubenswrapper[3561]: E1203 00:07:05.637168 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.637135488 +0000 UTC m=+28.417569786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637296 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637337 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637358 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.637461 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.637437427 +0000 UTC m=+28.417871725 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.667277 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.667451 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.667508 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.667604 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.667652 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.667738 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.667783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.667859 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.667901 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.667985 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668050 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668151 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668206 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668284 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668337 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668414 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668465 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668579 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668662 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668713 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668750 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668798 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668906 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.668928 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.668981 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669008 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669064 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669096 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669141 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669176 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669211 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669296 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669323 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669354 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669391 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669332 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669728 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.669802 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.669934 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.670094 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.670269 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.670352 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.670471 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.670502 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.670669 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.670849 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.670987 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.671119 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671208 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671268 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.671302 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671356 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671385 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671476 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.671649 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671743 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.671846 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671933 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672018 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.672076 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672143 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672320 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.672480 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672607 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.671360 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672769 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672891 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.672994 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.673116 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.673152 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.673235 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.673331 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.673464 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.673609 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.735466 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:05 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:05 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:05 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.735511 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.739000 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739139 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739156 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739166 3561 projected.go:200] Error preparing data for projected 
volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739226 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.739211923 +0000 UTC m=+28.519646181 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.739329 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.739476 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739403 3561 
projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739525 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739531 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739590 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.739582044 +0000 UTC m=+28.520016302 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739676 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739687 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739694 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.739740 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.739732369 +0000 UTC m=+28.520166627 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.843362 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.843418 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.843502 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.843562 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" 
(UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843644 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843669 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843683 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843734 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843779 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843784 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843794 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843807 3561 projected.go:200] 
Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843809 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843845 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.843827593 +0000 UTC m=+28.624261871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843898 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.843872204 +0000 UTC m=+28.624306502 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843906 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843920 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843930 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.843962 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.843951347 +0000 UTC m=+28.624385615 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.844128 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.844120012 +0000 UTC m=+28.624554280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.946160 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.946204 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" 
(UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.946609 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.946630 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.946641 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.946687 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.946675681 +0000 UTC m=+28.727109939 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.946613 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.947721 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.947804 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: E1203 00:07:05.948457 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:09.948410893 +0000 UTC m=+28.728845151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.978403 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="2f6ade1aa7d719e2796a077f4222ad47d1eebd43555c7c6f09d823e9a5efd7ca" exitCode=0 Dec 03 00:07:05 crc kubenswrapper[3561]: I1203 00:07:05.980336 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"2f6ade1aa7d719e2796a077f4222ad47d1eebd43555c7c6f09d823e9a5efd7ca"} Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.052308 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052600 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052645 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052668 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 
for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.052731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052755 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.052725966 +0000 UTC m=+28.833160264 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052877 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052910 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.052931 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.053029 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.052982023 +0000 UTC m=+28.833416331 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.154432 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.154929 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.154963 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.154980 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.155272 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:10.155215062 +0000 UTC m=+28.935649360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.156943 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.157005 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157041 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157065 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157078 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object 
"hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.157110809 +0000 UTC m=+28.937545137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.157174 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157277 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157303 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157322 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod 
openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157388 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.157372407 +0000 UTC m=+28.937806705 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157306 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157428 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157440 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.157482 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.15746931 +0000 UTC m=+28.937903628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.258456 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.258509 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.258584 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258700 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258733 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258747 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258810 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.258791331 +0000 UTC m=+29.039225599 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258894 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258913 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258924 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.258969 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.258953736 +0000 UTC m=+29.039388084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.259021 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.259034 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.259063 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:10.259053769 +0000 UTC m=+29.039488037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671143 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671352 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671417 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671570 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671644 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671681 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671739 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671803 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671867 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671905 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.671963 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.671996 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.672048 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.672082 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.672144 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.672420 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.672509 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.672609 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:06 crc kubenswrapper[3561]: E1203 00:07:06.672686 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.744099 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:06 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:06 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:06 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.744212 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.982477 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="4e41f51f872dce9110c53bf9a814c4305874fd576c557f8614b6f39667acba91" exitCode=0 Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.982863 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"4e41f51f872dce9110c53bf9a814c4305874fd576c557f8614b6f39667acba91"} Dec 03 00:07:06 crc kubenswrapper[3561]: I1203 00:07:06.992800 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.638598 3561 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.640782 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.640976 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.641150 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.641467 3561 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.649906 3561 kubelet_node_status.go:116] "Node was previously registered" node="crc" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.650186 3561 kubelet_node_status.go:80] "Successfully registered node" node="crc" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.653621 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.653815 3561 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664316 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665238 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664391 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664513 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664514 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664615 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664667 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664683 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664733 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664766 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664777 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664817 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664848 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664868 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664897 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664926 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664939 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664926 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664954 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664951 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664973 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664979 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.664980 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665026 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665018 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665017 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665046 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665055 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665073 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665101 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665043 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.665140 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.670708 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.671155 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.671353 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.671527 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.671814 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.671977 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.672129 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674017 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674117 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674229 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674310 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674390 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674450 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674619 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674693 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674752 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674828 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674932 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.674983 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675032 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675081 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675139 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675216 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675270 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675300 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675593 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675668 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675731 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675780 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675803 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675860 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.675912 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.676212 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.672422 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.673469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.673669 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.673774 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:07 crc kubenswrapper[3561]: E1203 00:07:07.673902 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.736380 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:07 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:07 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:07 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:07 crc kubenswrapper[3561]: I1203 00:07:07.736468 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.000658 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="853006c8491619122fffc895f8a088aeae61de2e90555407cc27e15747197a47" exitCode=0 Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.000716 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" 
event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"853006c8491619122fffc895f8a088aeae61de2e90555407cc27e15747197a47"} Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.056666 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.063700 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.663528 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.663716 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.663779 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.663853 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.663892 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.663956 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.663990 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664068 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664103 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664171 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664199 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664255 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664289 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664345 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664372 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664424 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664456 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664508 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.664553 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.664612 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.736115 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:08 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:08 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:08 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.736227 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.994747 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.994804 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.994834 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995192 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995265 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995310 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995340 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995830 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995871 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995903 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995933 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: 
\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995962 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.995991 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996024 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996111 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996185 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996283 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996348 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996382 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996410 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996443 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996474 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996505 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996560 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996592 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996648 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996677 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996705 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996733 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996760 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod 
\"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996789 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996819 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996865 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996895 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996927 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996968 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.996996 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997027 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997054 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod 
\"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997083 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997149 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997180 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997209 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997239 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997299 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:08 crc kubenswrapper[3561]: I1203 00:07:08.997351 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997507 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997570 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997554118 +0000 UTC m=+35.777988376 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997682 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997719 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997702722 +0000 UTC m=+35.778136980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997763 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997789 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997781645 +0000 UTC m=+35.778215903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997829 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997854 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997846287 +0000 UTC m=+35.778280545 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997894 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997918 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997910899 +0000 UTC m=+35.778345157 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997953 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.997977 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.997969411 +0000 UTC m=+35.778403669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998014 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998040 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998031833 +0000 UTC m=+35.778466091 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998080 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998108 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998099005 +0000 UTC m=+35.778533263 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998144 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998166 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998159537 +0000 UTC m=+35.778593795 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998204 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998247 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998221858 +0000 UTC m=+35.778656116 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998291 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998316 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998308541 +0000 UTC m=+35.778742799 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998356 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998379 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998372343 +0000 UTC m=+35.778806601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998420 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998444 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998436945 +0000 UTC m=+35.778871213 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998484 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998509 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998501667 +0000 UTC m=+35.778935925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998564 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998589 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998581619 +0000 UTC m=+35.779015877 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998811 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998802955 +0000 UTC m=+35.779237213 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998855 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998879 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998871967 +0000 UTC m=+35.779306235 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998915 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998939 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998931939 +0000 UTC m=+35.779366197 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998974 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.998999 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.998991871 +0000 UTC m=+35.779426129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999036 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999061 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999053503 +0000 UTC m=+35.779487761 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999104 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999122785 +0000 UTC m=+35.779557053 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999171 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999195 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999188337 +0000 UTC m=+35.779622595 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999231 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999255 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999246769 +0000 UTC m=+35.779681027 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999297 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999336 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999328091 +0000 UTC m=+35.779762349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999373 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999398 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999389313 +0000 UTC m=+35.779823571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999435 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999459 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999451565 +0000 UTC m=+35.779885823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999494 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999519 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999512117 +0000 UTC m=+35.779946375 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999579 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999604 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.99959678 +0000 UTC m=+35.780031038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999656 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999681 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999674242 +0000 UTC m=+35.780108510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:08 crc kubenswrapper[3561]: E1203 00:07:08.999722 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999745 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999738144 +0000 UTC m=+35.780172402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999782 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999806 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999798976 +0000 UTC m=+35.780233234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999840 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999864 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999857128 +0000 UTC m=+35.780291386 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999899 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999922 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999915179 +0000 UTC m=+35.780349437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999963 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:08.999986 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:16.999978931 +0000 UTC m=+35.780413189 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000021 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000047 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000038153 +0000 UTC m=+35.780472411 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000090 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000116 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000107615 +0000 UTC m=+35.780541873 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000159 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000186 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000177667 +0000 UTC m=+35.780611935 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000324 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000363 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000344582 +0000 UTC m=+35.780778850 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000402 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000427 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000419555 +0000 UTC m=+35.780853813 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000476 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000485 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000509 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000502277 +0000 UTC m=+35.780936535 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000575 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000602 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.00059328 +0000 UTC m=+35.781027538 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000638 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000662 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000655142 +0000 UTC m=+35.781089400 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000698 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000720 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000713284 +0000 UTC m=+35.781147552 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000763 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000784 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000777706 +0000 UTC m=+35.781211964 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000842 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000855 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000864 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000888 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000881039 +0000 UTC m=+35.781315297 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000952 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000980 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.000971662 +0000 UTC m=+35.781405920 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.000222 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.001020 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.001012693 +0000 UTC m=+35.781446951 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.006911 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"24358ec15ff59003d6631148049412bc93b5a21207acd8315e0475f455558be3"}
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100460 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100531 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100618 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100691 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100724 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100782 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100838 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100883 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100912 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.100971 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101006 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101038 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101071 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101127 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101199 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101231 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101263 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101318 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101348 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101379 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101408 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101439 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101494 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101691 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101771 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101815 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101846 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101917 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101962 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.101993 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102031 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102064 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102095 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102125 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102154 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102185 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102267 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102300 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102329 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102382 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102425 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102455 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102485 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102563 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102596 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102626 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102655 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102689 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.102774 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.102920 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.102971 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.102955023 +0000 UTC m=+35.883389291 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103199 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103233 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103222551 +0000 UTC m=+35.883656819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103272 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103297 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103289993 +0000 UTC m=+35.883724261 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103347 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103373 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103365125 +0000 UTC m=+35.883799403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103411 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103455 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103429117 +0000 UTC m=+35.883863395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103496 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103520 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.10351224 +0000 UTC m=+35.883946518 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103581 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103608 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103600342 +0000 UTC m=+35.884034610 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103653 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103676 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103669414 +0000 UTC m=+35.884103692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103716 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103740 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103732896 +0000 UTC m=+35.884167174 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103780 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103803 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103795868 +0000 UTC m=+35.884230136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103845 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103870 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.10386241 +0000 UTC m=+35.884296688 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103914 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103939 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103931492 +0000 UTC m=+35.884365760 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.103981 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104007 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.103998824 +0000 UTC m=+35.884433092 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104048 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104071 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104063556 +0000 UTC m=+35.884497834 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104127 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104164 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104151969 +0000 UTC m=+35.884586247 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104527 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104579 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104569262 +0000 UTC m=+35.885003530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104622 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104646 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104638814 +0000 UTC m=+35.885073082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104705 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104733 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104725697 +0000 UTC m=+35.885159965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104776 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104804 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104793529 +0000 UTC m=+35.885227807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104843 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104866 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104858791 +0000 UTC m=+35.885293069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104910 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104934 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104926573 +0000 UTC m=+35.885360841 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.104977 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105003 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.104994155 +0000 UTC m=+35.885428423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105044 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105069 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105061916 +0000 UTC m=+35.885496184 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105112 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105137 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105129698 +0000 UTC m=+35.885563966 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105187 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105224 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105212151 +0000 UTC m=+35.885646419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105272 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105299 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105291983 +0000 UTC m=+35.885726261 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105342 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105366 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105358065 +0000 UTC m=+35.885792333 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105408 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105433 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105426057 +0000 UTC m=+35.885860325 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105477 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105501 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105494159 +0000 UTC m=+35.885928427 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105567 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105599 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105589332 +0000 UTC m=+35.886023600 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105640 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105665 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105658344 +0000 UTC m=+35.886092612 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105708 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105735 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105727616 +0000 UTC m=+35.886161894 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105778 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105803 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105795909 +0000 UTC m=+35.886230187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105842 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105867 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105859871 +0000 UTC m=+35.886294139 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105904 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105929 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105920852 +0000 UTC m=+35.886355130 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105969 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.105998 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.105988795 +0000 UTC m=+35.886423063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106039 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106062 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106054887 +0000 UTC m=+35.886489155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106113 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106141 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106132049 +0000 UTC m=+35.886566317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106181 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106207 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106198751 +0000 UTC m=+35.886633029 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106267 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106295 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106286714 +0000 UTC m=+35.886720992 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106341 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106366 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106358826 +0000 UTC m=+35.886793094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106410 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106434 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106426838 +0000 UTC m=+35.886861116 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106478 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106502 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.10649507 +0000 UTC m=+35.886929348 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106563 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106589 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106581553 +0000 UTC m=+35.887015821 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106635 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106663 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106654855 +0000 UTC m=+35.887089133 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106706 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106732 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106723967 +0000 UTC m=+35.887158245 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106774 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106801 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106792069 +0000 UTC m=+35.887226347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106839 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106864 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106856881 +0000 UTC m=+35.887291149 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106903 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106929 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106921923 +0000 UTC m=+35.887356191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106970 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.106995 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.106987985 +0000 UTC m=+35.887422253 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.107036 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.107060 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.107051887 +0000 UTC m=+35.887486155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.107105 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.107130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.107122669 +0000 UTC m=+35.887556947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.204163 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.204664 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.204906 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.204406 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205181 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205193 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.204799 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205244 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205254 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205288 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.205273824 +0000 UTC m=+35.985708082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.204984 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205311 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205318 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205336 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.205330566 +0000 UTC m=+35.985764824 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.205360 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.205353277 +0000 UTC m=+35.985787535 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.310872 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.310996 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.311045 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311849 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311867 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311880 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311929 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.311913637 +0000 UTC m=+36.092347895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311982 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.311994 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312002 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312025 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.31201737 +0000 UTC m=+36.092451628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312070 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312082 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312089 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.312114 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.312106193 +0000 UTC m=+36.092540461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.413184 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.413231 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.413299 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413375 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413398 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413410 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413443 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413458 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413466 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.413552 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413591 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.413563337 +0000 UTC m=+36.193997605 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413673 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.41365958 +0000 UTC m=+36.194093838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413617 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413718 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413728 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413619 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413791 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413807 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03
00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413758 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.413751423 +0000 UTC m=+36.194185681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.413909 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.413897728 +0000 UTC m=+36.194331996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.414036 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.414152 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.414174 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.414310 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.41429952 +0000 UTC m=+36.194733788 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.517386 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.517497 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518494 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518524 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518555 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 
00:07:09.518606 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.518590742 +0000 UTC m=+36.299025010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518663 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518677 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518688 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.518717 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.518708955 +0000 UTC m=+36.299143233 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620618 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620662 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620692 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620763 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" 
(UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620786 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.620836 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.620873 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.620915 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.620933 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 
00:07:09.621002 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.620980595 +0000 UTC m=+36.401414863 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621069 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621086 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621096 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621172 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621184 3561 projected.go:294] Couldn't get 
configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621191 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621219 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.621206532 +0000 UTC m=+36.401640790 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621278 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621293 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621325 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.621315325 +0000 UTC m=+36.401749593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621349 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621360 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621366 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621387 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.621381257 +0000 UTC m=+36.401815515 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621451 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.621444999 +0000 UTC m=+36.401879247 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621490 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621508 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621518 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.621567 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.621552712 +0000 UTC m=+36.401986980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664183 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664257 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664258 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664222 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664186 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664305 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664394 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664452 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.664464 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664507 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664515 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.664612 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664622 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664657 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664688 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.664725 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664736 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664822 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.664823 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664888 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664889 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.664946 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.664955 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665000 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665033 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665034 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665135 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665213 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665232 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665301 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665328 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665377 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665247 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665436 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665451 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665518 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665555 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665583 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665666 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665772 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665834 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.665905 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.665993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666101 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.666306 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666445 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666605 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666730 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.666783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.666900 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.666958 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667031 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.667102 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667173 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.667210 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.667241 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.667294 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667354 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667444 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667576 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667593 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667663 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667737 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667831 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667914 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.667996 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668070 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668166 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668274 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668366 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668464 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.668573 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.735515 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:09 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:09 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:09 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.735627 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.814265 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.814380 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.814476 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.814554 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.814590 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815042 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815068 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815079 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815125 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815156 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815167 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.815148731 +0000 UTC m=+36.595582989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815171 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815176 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815182 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815183 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815238 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815254 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815205 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815217 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.815207872 +0000 UTC m=+36.595642130 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815362 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.815339646 +0000 UTC m=+36.595773904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815379 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.815370107 +0000 UTC m=+36.595804365 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815458 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815470 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815478 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.815507 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.815499041 +0000 UTC m=+36.595933299 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.917807 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.917860 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.917919 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:09 crc kubenswrapper[3561]: I1203 00:07:09.917943 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918148 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918162 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918172 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918212 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.918200124 +0000 UTC m=+36.698634382 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918261 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918270 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918302 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918323 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.918316998 +0000 UTC m=+36.698751256 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918372 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918382 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918389 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918412 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.91840335 +0000 UTC m=+36.698837608 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918448 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918458 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918464 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:09 crc kubenswrapper[3561]: E1203 00:07:09.918484 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:17.918479303 +0000 UTC m=+36.698913561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.020633 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.020755 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.020776 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.020801 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.020812 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.021016 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.021062 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.021077 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.021154 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.021132844 +0000 UTC m=+36.801567112 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.021313 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.021244767 +0000 UTC m=+36.801679075 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.032724 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="24358ec15ff59003d6631148049412bc93b5a21207acd8315e0475f455558be3" exitCode=0 Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.032798 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"24358ec15ff59003d6631148049412bc93b5a21207acd8315e0475f455558be3"} Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.055057 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.056216 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.056904 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.124676 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.124822 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126823 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126840 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126850 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod 
openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126892 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.126877629 +0000 UTC m=+36.907311887 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126950 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126961 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126968 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.126992 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.126984902 +0000 UTC m=+36.907419160 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.227555 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.227604 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.227787 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.227803 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc 
kubenswrapper[3561]: E1203 00:07:10.227813 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.227867 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.227926 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229697 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229721 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229730 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc 
kubenswrapper[3561]: E1203 00:07:10.229728 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229802 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229813 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229820 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229827 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229843 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229762 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:18.229749598 +0000 UTC m=+37.010183856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229861 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.229854161 +0000 UTC m=+37.010288419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229872 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.229867102 +0000 UTC m=+37.010301360 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.229919 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.229898822 +0000 UTC m=+37.010333080 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.230064 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.249706 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.331862 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.331906 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.331954 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332292 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332338 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332354 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 
00:07:10.332425 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.332405689 +0000 UTC m=+37.112839957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332292 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332460 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332472 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332490 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332496 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.332486602 +0000 UTC m=+37.112920880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332501 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.332572 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:18.332559064 +0000 UTC m=+37.112993322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664697 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664720 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664717 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664761 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664764 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664798 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664808 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664834 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664854 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.664861 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.665943 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.666201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.666496 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.666656 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.666788 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.667132 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.667275 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.667432 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.667518 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:10 crc kubenswrapper[3561]: E1203 00:07:10.667615 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.736282 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:10 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:10 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:10 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:10 crc kubenswrapper[3561]: I1203 00:07:10.736851 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.062000 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="328c875fdf8ec8d7581e81331c14a21d747471a385d7c1c973ec8c4563d4a0c4" exitCode=0
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.062079 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"328c875fdf8ec8d7581e81331c14a21d747471a385d7c1c973ec8c4563d4a0c4"}
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.063612 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.664901 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.664912 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665056 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665103 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665111 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665112 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665171 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665213 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665218 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665078 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665266 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665310 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665333 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665344 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665341 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665067 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665375 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665183 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665212 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665435 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665472 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665475 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.665378 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665612 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665628 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665648 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665700 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665719 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665742 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665774 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665788 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665822 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665825 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665855 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665894 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.665898 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.668877 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669184 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669239 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669455 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669565 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669675 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669773 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.669997 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.670081 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670170 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670267 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670462 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670615 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670774 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.670917 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671050 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671165 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671246 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671328 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671477 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671484 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671527 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671595 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671643 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671685 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671731 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671734 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671788 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.671904 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672087 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672156 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672287 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672448 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672629 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672793 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.672846 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.673007 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:11 crc kubenswrapper[3561]: E1203 00:07:11.678501 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.736135 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:11 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:11 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:11 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:11 crc kubenswrapper[3561]: I1203 00:07:11.736243 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.071849 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.071864 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"f1a4277865d9f81a87ee93740d548081037828d46bfea892eee6de97e974bb27"}
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664509 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664614 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664658 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664745 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664821 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664841 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665033 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.665063 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665186 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665348 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.665457 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.664574 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665746 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.665940 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.666042 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.666166 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.666352 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.667162 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:12 crc kubenswrapper[3561]: E1203 00:07:12.667247 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.732877 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.735818 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:12 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:12 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:12 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:12 crc kubenswrapper[3561]: I1203 00:07:12.735897 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664614 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664634 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664649 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664662 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664688 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664680 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664692 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664717 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664726 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664732 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664745 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664724 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664756 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664791 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664792 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664787 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664819 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664837 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664838 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664858 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664874 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664877 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664875 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664898 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664892 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664914 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664919 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664905 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664916 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664931 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664930 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664951 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.664968 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.665063 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.666713 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.666896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.667081 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.667481 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.667816 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.667941 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.668224 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.668469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.668658 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.668814 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.668976 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669219 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669302 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669579 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669780 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669793 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.669924 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.670006 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.670363 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.670453 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.670629 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.670804 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671024 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671403 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671465 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671612 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671742 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672381 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672442 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672498 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672612 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672772 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671864 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.671963 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672082 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672197 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:13 crc kubenswrapper[3561]: E1203 00:07:13.672267 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.735040 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:13 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:13 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:13 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:13 crc kubenswrapper[3561]: I1203 00:07:13.735130 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664346 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664714 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664384 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.664949 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664395 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665052 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664417 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665130 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664410 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665218 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664444 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665319 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664449 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665406 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665410 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664471 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.664444 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665514 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665701 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:14 crc kubenswrapper[3561]: E1203 00:07:14.665797 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.734695 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:14 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:14 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:14 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:14 crc kubenswrapper[3561]: I1203 00:07:14.734758 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664850 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664854 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664850 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.666158 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664836 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664913 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664946 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664965 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.664981 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665006 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665031 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665034 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665049 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665055 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665083 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665131 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665145 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665158 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665186 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665180 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665213 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665233 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665228 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665231 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665277 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665267 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665274 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665302 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665315 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665341 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665356 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665361 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665378 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665399 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.666436 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665414 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665421 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.665471 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.666774 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.666857 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.666971 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667143 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667257 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667357 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667523 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667685 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667817 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.667966 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.668212 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.668346 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.668504 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.668763 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.668993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.669881 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.670104 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.670263 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.670386 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.671403 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.671648 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.671667 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.671903 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.672010 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.672190 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.672503 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.672773 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.672915 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673242 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673358 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673389 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673505 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673702 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673870 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:15 crc kubenswrapper[3561]: E1203 00:07:15.673982 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.736007 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:15 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:15 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:15 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:15 crc kubenswrapper[3561]: I1203 00:07:15.736247 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.663903 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.663945 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.663951 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.663903 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664125 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664171 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664171 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664194 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664229 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664296 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664324 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664413 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664574 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.664628 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664784 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.664886 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.665033 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.665073 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.665140 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.665203 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:16 crc kubenswrapper[3561]: E1203 00:07:16.679826 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.754824 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:16 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:16 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:16 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:16 crc kubenswrapper[3561]: I1203 00:07:16.754897 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.061905 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.061980 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062009 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062061 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062090 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062109 3561 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062136 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062170 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062209 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.06218476 +0000 UTC m=+51.842619018 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062217 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062273 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062300 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062318 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062291163 +0000 UTC m=+51.842725581 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062222 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062350 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062330355 +0000 UTC m=+51.842764613 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062350 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062369 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062360636 +0000 UTC m=+51.842795104 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062415 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062444 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062461 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062441968 +0000 UTC m=+51.842876226 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062487 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062489 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062478519 +0000 UTC m=+51.842912777 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062520 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062527 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062569 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062561822 +0000 UTC m=+51.842996080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062573 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062609 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062617 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062658 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062663 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062654285 +0000 UTC m=+51.843088643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062688 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062708 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062723 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062733 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:33.062725747 +0000 UTC m=+51.843160125 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062762 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062773 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062789 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062782538 +0000 UTC m=+51.843216796 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062811 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062862 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062894 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062886612 +0000 UTC m=+51.843320980 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062898 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062930 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062941 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062926383 +0000 UTC m=+51.843360641 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062976 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062986 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062961594 +0000 UTC m=+51.843395852 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.062934 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063003 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.062813 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063003 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.062996295 +0000 UTC m=+51.843430553 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063036 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063058 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063062 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063056387 +0000 UTC m=+51.843490645 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063089 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063098 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063085198 +0000 UTC m=+51.843519646 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063111 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063105168 +0000 UTC m=+51.843539426 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063134 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063160 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063158 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063163 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063189 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063182111 +0000 UTC m=+51.843616359 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063193 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063224 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063198781 +0000 UTC m=+51.843633039 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063240 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063254 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063267 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063259483 +0000 UTC m=+51.843693981 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063296 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063315 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063340 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063341 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063335085 +0000 UTC m=+51.843769343 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063410 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063426 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063451 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063483 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063467389 +0000 UTC m=+51.843901647 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063500 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063514 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063530 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063522741 +0000 UTC m=+51.843956999 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063578 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063587 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063577973 +0000 UTC m=+51.844012221 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063607 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063596783 +0000 UTC m=+51.844031261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063635 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063656 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063670 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063701 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063676716 +0000 UTC m=+51.844110974 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063714 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063725 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063741 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063734468 +0000 UTC m=+51.844168926 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063790 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063808 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063813 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.06380765 +0000 UTC m=+51.844241908 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063839 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063867 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063842 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063832511 +0000 UTC m=+51.844266969 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063882 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063913 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063931 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063921153 +0000 UTC m=+51.844355411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063917 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063947 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063938964 +0000 UTC m=+51.844373212 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063969 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063961085 +0000 UTC m=+51.844395343 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063970 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063984 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063977755 +0000 UTC m=+51.844412013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.063969 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.063999 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064002 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.063994786 +0000 UTC m=+51.844429044 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064039 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064070 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064082 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064106 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064098409 +0000 UTC m=+51.844532667 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064072 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064128 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064130 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064118249 +0000 UTC m=+51.844552657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064142 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064153 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064184 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064177581 +0000 UTC m=+51.844611839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064208 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064216 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064274 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064263323 +0000 UTC m=+51.844697801 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064303 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064326 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064319965 +0000 UTC m=+51.844754223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064500 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064558 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.064706 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064613 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064651 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064857 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.064898 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064889792 +0000 UTC m=+51.845324050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.065009 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.064993005 +0000 UTC m=+51.845427453 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.065031 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.065023196 +0000 UTC m=+51.845457654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.065154 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.065200 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.065233 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.065275 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.065303 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070742 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070764 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070781 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070814 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070834 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.070796671 +0000 UTC m=+51.851230929 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070864 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.070851113 +0000 UTC m=+51.851285371 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070903 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.070882004 +0000 UTC m=+51.851316262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.070976 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.071111 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.07109535 +0000 UTC m=+51.851529608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.074284 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.074249266 +0000 UTC m=+51.854683524 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167386 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167428 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167460 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167482 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167528 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167571 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167602 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167629 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167678 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod 
\"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167723 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167747 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167768 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.167790 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167852 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object 
"openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167888 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167926 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.167906054 +0000 UTC m=+51.948340352 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167941 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167947 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.167936845 +0000 UTC m=+51.948371183 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167967 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.167959166 +0000 UTC m=+51.948393434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167997 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168024 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168011748 +0000 UTC m=+51.948446006 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168034 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168056 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168070 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168061139 +0000 UTC m=+51.948495487 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168087 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.16807883 +0000 UTC m=+51.948513168 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168093 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168113 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168107191 +0000 UTC m=+51.948541449 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168131 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168144 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168156 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:33.168149152 +0000 UTC m=+51.948583490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168168 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168162932 +0000 UTC m=+51.948597190 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168192 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168209 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168216 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images 
podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168208964 +0000 UTC m=+51.948643312 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.167865 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168248 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168241535 +0000 UTC m=+51.948675893 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168272 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168292 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168286406 +0000 UTC m=+51.948720664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168290 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168324 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168339 3561 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168351 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168342788 +0000 UTC m=+51.948777126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168374 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168382 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168393 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:33.168387679 +0000 UTC m=+51.948821937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168419 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168439 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168433311 +0000 UTC m=+51.948867569 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168421 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168454 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168468 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168484 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168475202 +0000 UTC m=+51.948909530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168505 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168513 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168523 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168518153 +0000 UTC m=+51.948952411 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168563 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168573 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168598 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168591155 +0000 UTC m=+51.949025413 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168620 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168639 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168663 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168673 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168663818 +0000 UTC m=+51.949098156 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168700 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168740 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168761 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.16875513 +0000 UTC m=+51.949189378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168742 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168775 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168828 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168861 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168864 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object 
"openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168897 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168888705 +0000 UTC m=+51.949323033 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168921 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168927 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168941 3561 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.168935866 +0000 UTC m=+51.949370124 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168961 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.168982 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.168993 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169017 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: 
\"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169039 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169042 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169056 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169046339 +0000 UTC m=+51.949480687 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169083 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169094 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169113 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169107151 +0000 UTC m=+51.949541409 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169115 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169143 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169147 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169166 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169177 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169168703 +0000 UTC m=+51.949603041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169200 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169219 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169223 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169217695 +0000 UTC m=+51.949651943 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169249 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169295 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169301 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169333 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169357 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169374 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169249 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" 
failed. No retries permitted until 2025-12-03 00:07:33.169241555 +0000 UTC m=+51.949675903 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169412 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169440 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169431011 +0000 UTC m=+51.949865339 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169478 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169490 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169503 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169522 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169512334 +0000 UTC m=+51.949946652 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169571 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169617 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169651 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169642128 +0000 UTC m=+51.950076416 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169618 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169659 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169701 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169714 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169733 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169742 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.16973377 +0000 UTC m=+51.950168068 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169782 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169797 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169813 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169804783 +0000 UTC m=+51.950239041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169840 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169872 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169896 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169905 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.169935 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169937 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169925966 +0000 UTC m=+51.950360324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169968 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169957907 +0000 UTC m=+51.950392255 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169980 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170016 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170048 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170069 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170120 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170129 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169843 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object 
"openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169572 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.169984 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.169976398 +0000 UTC m=+51.950410736 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170190 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170178594 +0000 UTC m=+51.950612932 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170214 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:33.170205925 +0000 UTC m=+51.950640253 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170235 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170226336 +0000 UTC m=+51.950660694 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170256 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170248606 +0000 UTC m=+51.950682944 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170278 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170269787 +0000 UTC m=+51.950704135 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.170333 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.170382 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170408 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170389191 +0000 UTC m=+51.950823489 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170431 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170421992 +0000 UTC m=+51.950856330 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170444 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170447 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170440562 +0000 UTC m=+51.950874910 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170468 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170459633 +0000 UTC m=+51.950893991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170485 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170476603 +0000 UTC m=+51.950910981 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170500 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170493274 +0000 UTC m=+51.950927612 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170516 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170508274 +0000 UTC m=+51.950942622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170530 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170522165 +0000 UTC m=+51.950956503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170568 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170559466 +0000 UTC m=+51.950993814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170711 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170697609 +0000 UTC m=+51.951131917 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170720 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170823 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170801142 +0000 UTC m=+51.951235400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.170977 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.170961527 +0000 UTC m=+51.951395785 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.273248 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.273305 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.273385 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273446 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273475 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273489 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273555 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.273522797 +0000 UTC m=+52.053957135 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273654 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273674 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273687 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273740 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273813 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273831 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.273891 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.273819946 +0000 UTC m=+52.054254204 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.274120 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.274079014 +0000 UTC m=+52.054513272 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.376760 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.376881 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.376926 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.376960 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377000 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377017 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377098 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.377074646 +0000 UTC m=+52.157508914 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377208 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377230 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377241 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377296 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.377277052 +0000 UTC m=+52.157711490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377324 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377345 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377359 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.377399 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.377388005 +0000 UTC m=+52.157822283 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.479602 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.479660 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.479758 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.479876 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.479898 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.479935 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.479939 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.479951 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480016 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.479996265 +0000 UTC m=+52.260430533 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480096 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480113 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480156 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.480140979 +0000 UTC m=+52.260575307 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480206 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480221 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480230 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480263 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.480252633 +0000 UTC m=+52.260686911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480301 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480315 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480324 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480354 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.480345296 +0000 UTC m=+52.260779654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480399 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480412 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480422 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.480553 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.480527571 +0000 UTC m=+52.260961849 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.584212 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.584362 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584426 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584471 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584487 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584577 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.584551855 +0000 UTC m=+52.364986113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584719 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584765 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584781 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.584950 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.584915976 +0000 UTC m=+52.365350414 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664379 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664436 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664435 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664467 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664584 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664596 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664606 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664891 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664905 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664471 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664935 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664436 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664967 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.664907 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.664479 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665058 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665057 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665099 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665151 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665234 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665272 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665313 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665328 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665358 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665401 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665403 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665466 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665533 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665592 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665596 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665654 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665680 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665727 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665788 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665816 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665857 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.665908 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665931 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.665968 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666011 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666036 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666074 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666117 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666150 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666197 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666254 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666308 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666345 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666392 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666429 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666491 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666521 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666605 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666688 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666741 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666773 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666832 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.666859 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.666922 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667134 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667212 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667281 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667347 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667391 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667439 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667484 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667573 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667631 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667692 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667756 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667808 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.667857 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686635 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686739 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686779 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686801 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686821 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686833 3561 
projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686849 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686876 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686911 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686903 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.686934 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: 
\"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686949 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686953 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686964 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686966 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686976 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687009 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.68699381 +0000 UTC m=+52.467428068 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.686920 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687024 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.68701714 +0000 UTC m=+52.467451398 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687051 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687062 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687070 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687098 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687098 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.687089682 +0000 UTC m=+52.467523940 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687111 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687123 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687123 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.687116103 +0000 UTC m=+52.467550361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687263 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.687252417 +0000 UTC m=+52.467686675 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.687282 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.687276178 +0000 UTC m=+52.467710436 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.734632 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:17 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:17 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:17 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.734762 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911157 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911212 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911232 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911314 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.911292659 +0000 UTC m=+52.691726937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.910985 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.911600 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.911755 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911799 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911847 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911886 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911889 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911915 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911930 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911962 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.911942118 +0000 UTC m=+52.692376386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.911992 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.91198093 +0000 UTC m=+52.692415198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.912071 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.912457 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.912495 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.912508 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.912586 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.912566087 +0000 UTC m=+52.693000425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: I1203 00:07:17.913835 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.913934 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.913953 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.913963 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:17 crc kubenswrapper[3561]: E1203 00:07:17.914005 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:07:33.91399451 +0000 UTC m=+52.694428808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.015608 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.015650 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.015727 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.015752 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015819 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015861 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015875 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015872 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015901 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015912 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.015941 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.01592191 +0000 UTC m=+52.796356248 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016007 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016022 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016033 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016071 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.016059884 +0000 UTC m=+52.796494132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016094 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016172 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016199 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016203 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.016187208 +0000 UTC m=+52.796621496 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.016308 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.01628116 +0000 UTC m=+52.796715458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.117970 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.118036 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118175 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118209 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118208 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118222 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118231 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118240 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118290 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.118270302 +0000 UTC m=+52.898704570 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.118427 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.118414187 +0000 UTC m=+52.898848515 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.220846 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.221005 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222257 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222286 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222298 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222347 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.222330967 +0000 UTC m=+53.002765235 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222414 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222427 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222436 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.222464 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.222455261 +0000 UTC m=+53.002889519 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.323596 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.323668 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.323746 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.323823 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324100 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324123 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324137 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324222 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324260 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.324224135 +0000 UTC m=+53.104658393 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324232 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324289 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324293 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324303 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324311 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324314 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324268 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324359 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324338 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.324328028 +0000 UTC m=+53.104762286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324405 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.32439554 +0000 UTC m=+53.104829798 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.324590 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.324562385 +0000 UTC m=+53.104996683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.427320 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.427374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: 
\"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.427465 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427489 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427513 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427526 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427593 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.427576318 +0000 UTC m=+53.208010576 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427616 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427634 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427644 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427716 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427760 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427759 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.427743382 +0000 UTC m=+53.208177640 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.427842 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:34.427819535 +0000 UTC m=+53.208253803 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.666362 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.666635 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.666744 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.666892 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.666958 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.667086 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667154 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.667357 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667456 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.667653 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667710 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667778 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667717 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667827 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.667886 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.667723 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.668103 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.668205 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.668279 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:18 crc kubenswrapper[3561]: E1203 00:07:18.668329 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.743993 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:18 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:18 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:18 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:18 crc kubenswrapper[3561]: I1203 00:07:18.744078 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663722 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663778 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663792 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663868 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663872 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663942 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.663903 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663972 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663948 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664117 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664151 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664177 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663902 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664214 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.663742 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664015 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664076 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664160 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664392 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664466 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664480 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664554 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664588 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664605 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664633 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664704 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664701 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664739 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664752 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664776 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664835 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664850 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664866 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664898 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.664970 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.664975 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665027 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665108 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665160 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665254 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665344 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665399 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665462 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665519 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665579 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665674 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665724 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665795 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.665865 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.665939 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666014 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666080 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666146 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666266 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666323 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666402 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666458 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666528 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.666574 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666637 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666678 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666722 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666787 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666835 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666895 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666945 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.666993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.667036 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.667081 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:19 crc kubenswrapper[3561]: E1203 00:07:19.667132 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.735594 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:19 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:19 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:19 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:19 crc kubenswrapper[3561]: I1203 00:07:19.735684 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.108666 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"a2ef5c460692e907086ee39d4da7f9f2f8e6b7a9d62758ca37b0417bd82cc057"} Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.663398 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.663706 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.663726 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.663780 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.663968 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664202 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.664323 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664418 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664524 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664601 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.664713 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.664844 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.664955 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665058 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665341 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665461 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665595 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:20 crc kubenswrapper[3561]: E1203 00:07:20.665727 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.735029 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:20 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:20 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:20 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:20 crc kubenswrapper[3561]: I1203 00:07:20.735408 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.666698 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.675818 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.675963 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676066 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676241 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676284 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676337 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676389 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676393 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676533 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676589 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676622 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676692 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676765 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676831 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.676871 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.676792 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.676934 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677037 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677057 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677109 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677128 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677236 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677325 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677361 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677448 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677561 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677617 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677661 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677663 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677744 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.677724 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677845 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.677907 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678019 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678026 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678238 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678363 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678428 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678505 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678655 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678674 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678707 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678713 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678681 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678678 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678790 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.678828 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.678931 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.679097 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.679116 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.679309 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.679312 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.679366 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.679341 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.679469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.679492 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.681783 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.681825 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.681853 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.681933 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.681986 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.681996 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682149 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682322 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682415 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682520 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682642 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682747 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682790 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.682888 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.683005 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.683092 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:21 crc kubenswrapper[3561]: E1203 00:07:21.683192 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.734272 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:21 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:21 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:21 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:21 crc kubenswrapper[3561]: I1203 00:07:21.734397 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664571 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664640 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664640 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664692 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664726 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664747 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664909 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.664923 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.665950 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666133 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666268 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666507 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666627 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666731 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.666952 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.667137 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.667375 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:22 crc kubenswrapper[3561]: E1203 00:07:22.667713 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.736826 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:22 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:22 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:22 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:22 crc kubenswrapper[3561]: I1203 00:07:22.736924 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.663996 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664050 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664120 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664164 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664213 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664262 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.664329 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664335 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664010 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664601 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664620 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.664633 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.664717 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664726 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664753 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664768 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.664827 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664878 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.664893 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665070 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665074 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665106 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665168 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665180 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665234 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665422 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665472 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665514 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665597 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665601 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665642 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665608 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665769 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665787 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.665804 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665956 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.665981 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666052 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666060 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666142 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666174 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666281 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666437 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666499 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666576 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666584 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666603 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666655 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666696 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666765 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666838 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.666870 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.666975 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667027 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667099 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667179 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.667215 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667278 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667345 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667395 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667466 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.667494 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667589 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667665 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667724 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667820 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667869 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667921 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.667969 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.668747 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.668827 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.668890 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:23 crc kubenswrapper[3561]: E1203 00:07:23.669196 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.735327 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:23 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:23 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:23 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:23 crc kubenswrapper[3561]: I1203 00:07:23.735456 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.663826 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.664277 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.663827 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.663870 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.663934 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.663988 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.664032 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.664084 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.664140 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.664185 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.665227 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.665444 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.665734 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.665929 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.666159 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.667215 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.667348 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.667534 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.667798 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:24 crc kubenswrapper[3561]: E1203 00:07:24.667936 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.736175 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:24 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:24 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:24 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:24 crc kubenswrapper[3561]: I1203 00:07:24.736317 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664051 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664104 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664159 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664132 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664246 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664302 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664329 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664328 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664421 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664353 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664506 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664513 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664310 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.664596 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664683 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.664794 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.664804 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665004 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.665043 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665048 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665014 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665115 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.665234 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665248 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.665395 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665410 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665535 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.665727 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665789 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.665829 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.665973 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666083 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.666203 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.666359 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666489 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.666505 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.666685 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666801 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666825 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.667275 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666841 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666903 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666909 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.666950 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.667001 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.667171 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.667216 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.668358 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.668589 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.668690 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.668802 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.668868 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.669258 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.669630 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.669814 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.669934 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670024 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670280 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670492 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670716 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670786 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.670922 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671009 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671124 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671205 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671296 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671380 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671453 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671593 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671662 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671761 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:25 crc kubenswrapper[3561]: E1203 00:07:25.671778 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.735028 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:25 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:25 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:25 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:25 crc kubenswrapper[3561]: I1203 00:07:25.735184 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.663944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664024 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664061 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664144 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664159 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664294 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664315 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664425 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664453 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664516 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664656 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664746 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664781 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664799 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.664794 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.664861 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.665101 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.665133 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.665205 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:26 crc kubenswrapper[3561]: E1203 00:07:26.684400 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.736303 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:26 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:26 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:26 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:26 crc kubenswrapper[3561]: I1203 00:07:26.736396 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664172 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664236 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664300 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664249 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664363 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664378 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664473 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.664508 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664578 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.664683 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664709 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664749 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664868 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.664922 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664958 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665003 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.664983 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665067 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.664982 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665047 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.665201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665242 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665281 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665313 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665328 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665285 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.665317 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665381 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.665521 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665578 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665615 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665635 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665662 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.665816 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.665853 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.665986 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666129 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666254 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666367 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666484 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666646 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666762 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.666823 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.666989 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667106 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.667200 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.667307 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667419 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.667466 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667619 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.667722 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667782 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667885 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.667987 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.668006 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.668090 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668156 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668306 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668481 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668746 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.668764 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668870 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.668952 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.669039 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669045 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669175 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669277 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669382 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669527 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669663 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.669785 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.670067 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:27 crc kubenswrapper[3561]: E1203 00:07:27.671198 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.735399 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:27 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:27 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:27 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:27 crc kubenswrapper[3561]: I1203 00:07:27.735457 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663702 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663787 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663825 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663856 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663795 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663734 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663729 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.663897 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.664181 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.664423 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.664581 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.664610 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.664818 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665317 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665409 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665442 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665619 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665678 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:28 crc kubenswrapper[3561]: E1203 00:07:28.665811 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.736180 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:28 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:28 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:28 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.736298 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.980667 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:07:28 crc kubenswrapper[3561]: I1203 00:07:28.981854 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.063390 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" 
probeResult="failure" output=""
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.126036 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664020 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.664303 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664400 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.664529 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664653 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664776 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664813 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664874 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.664969 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664982 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665037 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665134 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665087 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665091 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.664054 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.665283 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665356 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665418 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665456 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.665475 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665584 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.665681 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665784 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665816 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665833 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.665778 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.666005 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.666064 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.666591 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.666676 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.666811 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.666900 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.667004 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667069 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667175 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.667321 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667377 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.667501 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667584 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667679 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.667915 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.667962 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668074 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.668159 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668235 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668287 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.668359 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.668385 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668630 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.668822 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.668872 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.669046 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.669156 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.669245 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.669371 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.669591 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.669667 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670001 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670002 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670091 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670300 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670592 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670780 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.670899 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.671005 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.671128 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.671300 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.671449 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.671879 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.672098 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:29 crc kubenswrapper[3561]: E1203 00:07:29.673108 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.736233 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:29 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:29 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:29 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:29 crc kubenswrapper[3561]: I1203 00:07:29.736384 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.664205 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.664221 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.664373 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.664412 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.664517 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.664401 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.664799 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.665091 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.665304 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.665406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.665658 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.665777 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.665796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.665801 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.665975 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.666058 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.666198 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.666404 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.666615 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:30 crc kubenswrapper[3561]: E1203 00:07:30.666843 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.734888 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:30 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:30 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:30 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:30 crc kubenswrapper[3561]: I1203 00:07:30.735000 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663433 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663466 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663633 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663772 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663807 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.663806 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663777 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663847 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663875 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663914 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663954 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663974 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663947 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664010 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663887 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664017 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664066 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663775 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.663920 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.664176 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664199 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664264 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664268 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664301 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664264 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.664369 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664417 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664435 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664457 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664473 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.664583 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.664650 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.664950 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.665098 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.665168 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.665200 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.665376 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.665617 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.665630 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.665655 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670098 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670235 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.670241 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670279 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670642 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670833 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670919 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.670962 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.671388 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.671404 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.671518 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.671528 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.671634 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.671673 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.672953 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.672971 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.677688 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.677842 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.678306 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.678523 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.679040 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.679261 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.679674 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.680010 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.680207 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.680469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.680960 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.680757 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.681155 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.681247 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.681398 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.681499 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.681525 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:31 crc kubenswrapper[3561]: E1203 00:07:31.685672 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.737516 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:31 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:31 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:31 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:31 crc kubenswrapper[3561]: I1203 00:07:31.737613 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664088 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664223 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.664388 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664391 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664450 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664693 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.664819 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.664715 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.664997 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.665157 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.665226 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.665360 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.665416 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.665506 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.665656 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.665705 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.665972 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.666121 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.666330 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:32 crc kubenswrapper[3561]: E1203 00:07:32.666666 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.736152 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:32 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:32 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:32 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:32 crc kubenswrapper[3561]: I1203 00:07:32.736258 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113166 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113253 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113331 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object 
"openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113399 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.11338177 +0000 UTC m=+83.893816028 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113449 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113341 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113527 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113532 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.113505874 +0000 UTC m=+83.893940172 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113716 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.113672709 +0000 UTC m=+83.894107007 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113791 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113849 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113905 3561 secret.go:194] Couldn't get secret 
openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.113950 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.113938837 +0000 UTC m=+83.894373185 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.113950 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114003 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114055 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114106 3561 secret.go:194] Couldn't get secret 
openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114125 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114109933 +0000 UTC m=+83.894544231 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114171 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114185 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114160493 +0000 UTC m=+83.894594791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114252 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114279 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114308 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114291467 +0000 UTC m=+83.894725765 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114355 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114373 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114358969 +0000 UTC m=+83.894793297 (durationBeforeRetry 32s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114377 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114410 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114424 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114448 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114427151 +0000 UTC m=+83.894861449 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114472 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114474 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114461662 +0000 UTC m=+83.894895960 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114608 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: 
\"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114658 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114695 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114730 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.11470891 +0000 UTC m=+83.895143208 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114775 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114778 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114793 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114773172 +0000 UTC m=+83.895207480 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114880 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.114961 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114993 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115036 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.114997 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.114963018 +0000 UTC m=+83.895397326 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115092 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115070191 +0000 UTC m=+83.895504529 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115159 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115246 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115228876 +0000 UTC m=+83.895663174 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115278 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115266537 +0000 UTC m=+83.895700825 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115335 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115238 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115443 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115385181 +0000 UTC m=+83.895819539 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115481 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115510 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115527 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115514915 +0000 UTC m=+83.895949203 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115631 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115632 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115601977 +0000 UTC m=+83.896036275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115694 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115806 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115730 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115858 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115823114 +0000 UTC m=+83.896257412 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115732 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.115930 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.115914727 +0000 UTC m=+83.896349025 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.115992 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116032 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.116006 +0000 UTC m=+83.896440378 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116099 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116120 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116173 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116157824 +0000 UTC m=+83.896592122 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116175 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116212 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116239 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116253 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116258 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116243207 +0000 UTC m=+83.896677605 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116316 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116366 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116374 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116375 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.11635578 +0000 UTC m=+83.896790168 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116454 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116423472 +0000 UTC m=+83.896857771 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116490 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116472474 +0000 UTC m=+83.896906862 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116579 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116652 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116695 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116718 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116727 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116760 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116744722 +0000 UTC m=+83.897179010 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116789 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116774643 +0000 UTC m=+83.897208931 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116818 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116880 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116900 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.116879386 +0000 UTC m=+83.897313684 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.116960 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.116975 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117023 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117008009 +0000 UTC m=+83.897442307 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117027 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117107 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117128 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117155 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117179 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117201 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117179575 +0000 UTC m=+83.897613863 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117230 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117217816 +0000 UTC m=+83.897652104 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117254 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117242257 +0000 UTC m=+83.897676545 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117292 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117381 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117429 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117474 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117489 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.117523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117596 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117629 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117605 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117529495 +0000 UTC m=+83.897963783 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117675 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117697 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117742 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117729132 +0000 UTC m=+83.898163420 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117783 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117801 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117815 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117866 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117849965 +0000 UTC m=+83.898284253 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117896 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117882776 +0000 UTC m=+83.898317064 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.117920 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.117908207 +0000 UTC m=+83.898342505 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118077 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118180 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118226 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118372 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118408 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118499 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118439 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.118421533 +0000 UTC m=+83.898855821 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118754 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118810 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118816 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.118768983 +0000 UTC m=+83.899203281 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118851 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.118841786 +0000 UTC m=+83.899276044 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118866 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.118860216 +0000 UTC m=+83.899294474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118886 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118909 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118949 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.118969 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.118984 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119044 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119048 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119096 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119626 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.119016621 +0000 UTC m=+83.899450919 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119906 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.119898497 +0000 UTC m=+83.900332755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119920 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.119914118 +0000 UTC m=+83.900348376 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.119933 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.119927158 +0000 UTC m=+83.900361416 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.220744 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.220853 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.220922 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.220987 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221104 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221167 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221239 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221308 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221378 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221474 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221661 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221812 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.221989 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222040 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:07:33 crc 
kubenswrapper[3561]: I1203 00:07:33.222093 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222158 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222264 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222160 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222394 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222265 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222049 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222182 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.222152207 +0000 UTC m=+84.002586505 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222513 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222597 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222527759 +0000 UTC m=+84.002962117 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222632 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222647 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222616462 +0000 UTC m=+84.003050760 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222049 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222718 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222692554 +0000 UTC m=+84.003126882 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222760 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222740585 +0000 UTC m=+84.003175003 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222353 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222802 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222808 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.222720 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222722 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222793 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.222776236 +0000 UTC m=+84.003210674 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222433 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222937 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222916751 +0000 UTC m=+84.003351049 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.222976 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222962712 +0000 UTC m=+84.003397010 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223000 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.222987603 +0000 UTC m=+84.003421891 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223024 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223011074 +0000 UTC m=+84.003445372 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223050 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.223038515 +0000 UTC m=+84.003472813 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223081 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223063785 +0000 UTC m=+84.003498223 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223115 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223097016 +0000 UTC m=+84.003531434 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223195 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223269 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223298 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223271022 +0000 UTC m=+84.003705310 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223350 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223319192 +0000 UTC m=+84.003753490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223363 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223440 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223451 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.223395564 +0000 UTC m=+84.003829852 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223511 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223529 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223604 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223591021 +0000 UTC m=+84.004025309 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223607 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223667 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223684 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223719 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223738 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223722335 +0000 UTC m=+84.004156733 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223778 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223794 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223841 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223825998 +0000 UTC m=+84.004260286 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.223825 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223911 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223940 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223955 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223942871 +0000 UTC m=+84.004377169 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.223984 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.223969332 +0000 UTC m=+84.004403620 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224022 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224038 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224100 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224142 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224188 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224174628 +0000 UTC m=+84.004608926 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224146 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224207 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224258 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod 
\"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224285 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224325 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224313183 +0000 UTC m=+84.004747471 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224331 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224415 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224428 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 
00:07:33.224484 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224579 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224487 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224452747 +0000 UTC m=+84.004887065 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224656 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224626542 +0000 UTC m=+84.005060870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224727 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224703995 +0000 UTC m=+84.005138403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.224767 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.224744416 +0000 UTC m=+84.005178834 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224834 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.224974 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.225049 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225061 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.225181 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225207 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.225252 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225452 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.225430207 +0000 UTC m=+84.005864555 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.225517 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225598 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.225732 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225817 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225833 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225846 3561 configmap.go:199] 
Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225895 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.225942 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.225718606 +0000 UTC m=+84.006152904 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226119 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226032665 +0000 UTC m=+84.006466953 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226161 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226143718 +0000 UTC m=+84.006578006 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.226308 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226383 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226439 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls 
podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226407736 +0000 UTC m=+84.006842034 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226485 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226465418 +0000 UTC m=+84.006899856 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226530 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226511559 +0000 UTC m=+84.006946007 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226613 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226585831 +0000 UTC m=+84.007020239 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226679 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226660824 +0000 UTC m=+84.007095202 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.226776 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.226850 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226872 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.226839299 +0000 UTC m=+84.007273607 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.226980 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.227007 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.227070 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.227049666 +0000 UTC m=+84.007483964 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.227294 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.227264082 +0000 UTC m=+84.007698380 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.228597 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.228681 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.228748 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.228749 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.228846 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.228860 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.22884126 +0000 UTC m=+84.009275548 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.228884 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.228914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.228937 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229018 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object 
"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.229041 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229055 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229080 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229024785 +0000 UTC m=+84.009459073 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229107 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229095777 +0000 UTC m=+84.009530075 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229152 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.229297 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229439 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.229465 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229503 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229476949 +0000 UTC m=+84.009911237 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229576 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229536161 +0000 UTC m=+84.009970449 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229613 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229619 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229604313 +0000 UTC m=+84.010038721 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229662 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229648834 +0000 UTC m=+84.010083122 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.229734 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229780 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229862 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.229829 +0000 UTC m=+84.010263298 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.229914 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.229895342 +0000 UTC m=+84.010329640 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.230017 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.230079 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 
crc kubenswrapper[3561]: I1203 00:07:33.230130 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.230207 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230241 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230255 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230315 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.230295644 +0000 UTC m=+84.010729932 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230347 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.230331115 +0000 UTC m=+84.010765403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230368 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230374 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230414 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.230397607 +0000 UTC m=+84.010831905 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.230438 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.230423978 +0000 UTC m=+84.010858276 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.334159 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.334249 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.334314 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334498 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334567 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334573 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334597 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334623 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334644 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object 
"openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334683 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334736 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334743 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.334702359 +0000 UTC m=+84.115136697 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334762 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334798 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.334774071 +0000 UTC m=+84.115208489 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.334848 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.334820183 +0000 UTC m=+84.115254481 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.435929 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.436109 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: 
\"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436145 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436203 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436230 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436263 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436303 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436322 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, 
object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436335 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.436304588 +0000 UTC m=+84.216738886 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.436391 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.43637115 +0000 UTC m=+84.216805438 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.438737 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.438902 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.438934 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.438950 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.439092 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.439067622 +0000 UTC m=+84.219501920 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.540629 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.540778 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.540859 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.540861 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:33 
crc kubenswrapper[3561]: E1203 00:07:33.540933 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.540972 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541025 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541081 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541090 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.541058274 +0000 UTC m=+84.321492572 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541100 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541168 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.541149377 +0000 UTC m=+84.321583755 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541207 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.541228 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541235 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541258 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541330 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.541304962 +0000 UTC m=+84.321739250 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.541372 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541395 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541420 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541436 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541482 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:05.541467847 +0000 UTC m=+84.321902275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541577 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541596 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.541782 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.541747075 +0000 UTC m=+84.322181373 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.644900 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.645064 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645085 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645119 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645141 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645380 3561 projected.go:294] Couldn't get 
configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645430 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645453 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645392 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.645349946 +0000 UTC m=+84.425784234 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.645865 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.645831341 +0000 UTC m=+84.426265759 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663801 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663876 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663906 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663936 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663975 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664044 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664078 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664112 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664175 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.664202 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.663948 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664355 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664373 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664464 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.664378 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.664677 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.664715 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.664947 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665008 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.665145 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.665272 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665415 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665453 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665466 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.665533 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665624 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665621 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.665737 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665843 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665873 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.665962 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.665970 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.666079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666219 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.666286 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666330 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.666393 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666537 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666702 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666815 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.666951 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667012 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667096 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.667199 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667246 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667329 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.667418 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667496 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667620 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.667717 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.667846 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.667939 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.668043 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.668096 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.668246 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.668300 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.680227 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.680368 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.680468 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.680731 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.680938 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681047 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681297 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681416 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681494 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681684 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681849 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.681978 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.682096 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.682211 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.682312 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.682390 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.682490 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.736258 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:33 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:33 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:33 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.736379 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.750145 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.750224 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.750292 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: 
\"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750369 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750418 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750441 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750520 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.750491773 +0000 UTC m=+84.530926071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.750443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750625 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750654 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.750709 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750743 3561 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.750708249 +0000 UTC m=+84.531142597 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750934 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750936 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750960 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750976 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751040 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751063 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod 
openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.750987 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751002 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751300 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751152 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.751124742 +0000 UTC m=+84.531559040 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.751408 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751796 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751898 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751974 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.751933187 +0000 UTC m=+84.532367505 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.751979 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.752031 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.752004949 +0000 UTC m=+84.532439407 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.752250 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.752214245 +0000 UTC m=+84.532648543 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.959107 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.959278 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.959367 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959580 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959655 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959663 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959743 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959783 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959813 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959866 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959887 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.959677 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.960041 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.959991463 +0000 UTC m=+84.740425761 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.960090 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.960073985 +0000 UTC m=+84.740508283 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.960288 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.960249971 +0000 UTC m=+84.740684289 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.962143 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:33 crc kubenswrapper[3561]: I1203 00:07:33.962300 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.962431 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.962478 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.962496 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.962610 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.962590841 +0000 UTC m=+84.743025139 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.963346 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.963391 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.963418 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:33 crc kubenswrapper[3561]: E1203 00:07:33.963593 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:08:05.96352752 +0000 UTC m=+84.743961808 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.064251 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.064337 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064468 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064514 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064524 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.064490 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064602 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.064584143 +0000 UTC m=+84.845018401 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064644 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064683 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064703 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064711 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064777 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.064754969 +0000 UTC m=+84.845189297 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064717 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064845 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064865 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064785 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064888 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.064652 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.064919 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.064911223 +0000 UTC m=+84.845345481 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.065120 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.065077819 +0000 UTC m=+84.845512127 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.170209 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.170270 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170615 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170725 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170786 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170808 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170928 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170950 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.170890 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.170874345 +0000 UTC m=+84.951308603 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.171142 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.171113132 +0000 UTC m=+84.951547400 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.274675 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.274914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.275312 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.275579 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.275729 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.275945 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.27591232 +0000 UTC m=+85.056346608 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.275333 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.276280 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.276430 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.276639 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.27661955 +0000 UTC m=+85.057053848 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.324152 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.376194 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.376313 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376494 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.376528 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376571 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376584 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376635 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376660 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376882 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376984 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.377032 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.377004353 +0000 UTC m=+85.157438651 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.377069 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.376599 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.377407 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.377336363 +0000 UTC m=+85.157770661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.377500 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.377469797 +0000 UTC m=+85.157904195 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.381978 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.382111 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.382145 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.382162 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.382232 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.382212792 +0000 UTC m=+85.162647080 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.483147 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.483234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID:
\"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.483374 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483374 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483439 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483510 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483646 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.483614435 +0000 UTC m=+85.264048773 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483792 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483835 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483861 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483897 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.483949 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.484049 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.484022547 +0000 UTC m=+85.264456835 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.484078 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:06.484065919 +0000 UTC m=+85.264500217 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663522 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663614 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663701 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663713 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663648 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.663930 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663948 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.663980 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.664106 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.664129 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.664212 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.664374 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.664418 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.664582 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.664781 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.664964 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.665206 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.665343 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.665517 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:34 crc kubenswrapper[3561]: E1203 00:07:34.665696 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.735754 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:34 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:34 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:34 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:34 crc kubenswrapper[3561]: I1203 00:07:34.735832 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.663920 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.663990 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664000 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664046 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.663949 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664087 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664157 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.663947 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.663922 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664383 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664389 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.664484 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664487 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664535 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.664672 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664701 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664888 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.664896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.664928 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.665115 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665117 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665234 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.665397 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665427 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665535 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.665759 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665807 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665861 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665808 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.665901 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.666060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.666068 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.666307 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.666327 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.666505 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.666662 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.666797 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.667011 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667072 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667113 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667142 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.667250 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667342 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667472 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.667496 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.667862 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.667912 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668075 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668161 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668211 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.668261 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668362 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.668414 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668459 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668598 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.668603 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668729 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668888 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.668937 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669046 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669213 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669531 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669629 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669755 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.669835 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.669894 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670038 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670235 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670447 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670486 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670741 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670775 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.670899 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:35 crc kubenswrapper[3561]: E1203 00:07:35.671103 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.736173 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:35 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:35 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:35 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:35 crc kubenswrapper[3561]: I1203 00:07:35.736296 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.663890 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664020 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664028 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664089 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664096 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664154 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.663890 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664122 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.664206 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.663963 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.664404 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.664698 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.664912 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665062 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665329 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665490 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665584 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665729 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665822 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.665955 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:36 crc kubenswrapper[3561]: E1203 00:07:36.687198 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.736227 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:36 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:36 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:36 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:36 crc kubenswrapper[3561]: I1203 00:07:36.736366 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.663696 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.663831 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.663928 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.663953 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.663997 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664082 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664636 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664686 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664696 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664776 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664815 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664705 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664739 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664737 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664760 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664791 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664964 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.664879 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665126 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665157 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.665199 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665127 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.665257 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.665450 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665510 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665522 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665645 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.665673 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665697 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665734 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.665775 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.666158 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.666375 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.666408 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.666491 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.666579 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.666607 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.666584 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.666690 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.666739 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.666943 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.667140 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.667201 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.667400 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.667595 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.667813 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.668022 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.668079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.668359 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.668769 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.668871 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.668996 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.669073 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.669216 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.669516 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.669781 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.669974 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.670148 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.670367 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.670532 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.670727 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.670885 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.670995 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.671153 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.671315 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.671469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.671895 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.672183 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.672249 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.672351 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.672475 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:37 crc kubenswrapper[3561]: E1203 00:07:37.672655 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.736269 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:37 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:37 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:37 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:37 crc kubenswrapper[3561]: I1203 00:07:37.736379 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664377 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664503 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664597 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664412 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664634 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664682 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.664706 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664735 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664722 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.664366 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.664961 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.665011 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.665332 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.665462 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.665679 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.665858 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.666032 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.666240 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.666339 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:38 crc kubenswrapper[3561]: E1203 00:07:38.666470 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.735637 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:38 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:38 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:38 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:38 crc kubenswrapper[3561]: I1203 00:07:38.735771 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663768 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663812 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663898 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663946 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.663954 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664059 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664106 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664120 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664090 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664160 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664300 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664325 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664339 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664368 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664464 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664702 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664740 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664762 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664844 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.664845 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.664986 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.665120 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665176 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.665327 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665392 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.665440 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665572 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.665581 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665635 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665651 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665768 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.665884 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.665932 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.666012 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666103 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.666155 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.666240 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666360 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666362 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666490 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666633 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666741 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.666858 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.666903 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.666985 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667074 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667115 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667237 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667288 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667315 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667390 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667501 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667577 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667701 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667855 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.667931 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.667981 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668084 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668210 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668321 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668408 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668618 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.668754 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.669112 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.669496 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.673334 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.673910 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.674379 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.674995 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:39 crc kubenswrapper[3561]: E1203 00:07:39.677319 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.736334 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:39 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:39 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:39 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:39 crc kubenswrapper[3561]: I1203 00:07:39.736998 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.664639 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.664695 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.664647 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.664744 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.664872 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.664967 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.665008 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.665379 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.665684 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.665735 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.665783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.665853 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.665937 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.665989 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.666069 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.666106 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.666172 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.666225 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.666283 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:40 crc kubenswrapper[3561]: E1203 00:07:40.666347 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.749009 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:40 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:40 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:40 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:40 crc kubenswrapper[3561]: I1203 00:07:40.749106 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.498896 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.499393 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.499613 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.499830 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.500017 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.664802 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.664871 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.664808 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.664820 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.668855 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.668872 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.668987 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669006 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669032 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669083 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669140 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.669233 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669267 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669321 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.669424 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669458 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669585 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669628 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669630 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669653 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669640 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669658 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669688 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669694 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669708 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669721 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669720 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669730 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669768 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669699 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669779 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669792 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669793 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669795 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669804 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669777 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669829 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669746 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669885 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.669776 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.669989 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.670451 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.670487 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.670803 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671047 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671127 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671277 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671339 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671422 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671493 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671590 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671719 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.671842 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672016 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672084 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672140 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672195 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672253 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672296 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672501 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672550 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672639 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672341 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672459 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672464 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672501 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.673024 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672723 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672832 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.672907 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.673115 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.673303 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.673382 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:41 crc kubenswrapper[3561]: E1203 00:07:41.688374 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.735599 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:41 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:41 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:41 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:41 crc kubenswrapper[3561]: I1203 00:07:41.735689 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664317 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664402 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664528 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664636 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.664694 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664711 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664735 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664759 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.664789 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.664993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665176 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.665259 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665349 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.665406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665592 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665696 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665806 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.665957 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.666076 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:42 crc kubenswrapper[3561]: E1203 00:07:42.666279 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.736016 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:42 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:42 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:42 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:42 crc kubenswrapper[3561]: I1203 00:07:42.736132 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.663894 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664004 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.663894 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.664222 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664254 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664280 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664358 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.664478 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664489 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664503 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664638 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664699 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.664719 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664747 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.664810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665179 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665264 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665296 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665323 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665377 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665441 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665506 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.665443 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665505 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.665603 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.665730 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665837 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.665848 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665889 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665916 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665935 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665889 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.665998 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666021 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.666115 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666159 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666030 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.666299 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666210 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666421 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666449 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666386 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.666635 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.666762 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.666838 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.666932 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667048 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667251 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667258 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667351 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667443 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667583 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.667644 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667772 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.667896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668087 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668250 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668355 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668621 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668754 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668892 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.668974 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.669038 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669079 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669195 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669340 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669705 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669862 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.669973 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.670090 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:43 crc kubenswrapper[3561]: E1203 00:07:43.670187 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.735988 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:43 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:43 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:43 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:43 crc kubenswrapper[3561]: I1203 00:07:43.736087 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663715 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.664075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663743 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.664356 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663786 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663837 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663860 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663868 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663905 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.670600 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.663996 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.664035 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.670873 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.671011 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.671200 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.671448 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.671649 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.671855 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.672125 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:44 crc kubenswrapper[3561]: E1203 00:07:44.672314 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.735603 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:44 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:44 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:44 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:44 crc kubenswrapper[3561]: I1203 00:07:44.735736 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.663864 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.663932 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.663955 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664050 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664129 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664184 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664217 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664285 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664125 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664255 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664366 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664397 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664391 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664429 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664296 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664325 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664445 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664500 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.664412 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664507 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664611 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664260 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664666 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664736 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664746 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.664674 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664809 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664818 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664678 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664926 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664955 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.664929 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.664994 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.665216 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.665236 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.665338 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.665409 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.665490 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.665680 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.665797 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.665858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.665870 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666119 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.666184 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666216 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666241 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666294 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666453 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.666641 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666732 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.666907 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667038 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667114 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667361 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667409 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667468 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667670 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667777 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667957 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.667974 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668049 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668288 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668316 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668425 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668505 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668641 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668758 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668849 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.668928 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.669175 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.669241 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.669446 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:45 crc kubenswrapper[3561]: E1203 00:07:45.669627 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.735893 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:45 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:45 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:45 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:45 crc kubenswrapper[3561]: I1203 00:07:45.736035 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663738 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.664030 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663782 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.664223 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663788 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663857 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663957 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.664509 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.663970 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.664736 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.664936 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.665141 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.665352 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.665515 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.665660 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.665895 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.666044 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.666260 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:46 crc kubenswrapper[3561]: E1203 00:07:46.689345 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.735817 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:46 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:46 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:46 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:46 crc kubenswrapper[3561]: I1203 00:07:46.735908 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664454 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664535 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664625 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664712 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.664755 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664859 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664867 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664919 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664960 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664989 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665000 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665027 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665039 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664877 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665059 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665074 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.664754 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.665290 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665323 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665352 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.665494 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665663 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.665807 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.665875 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666002 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.666161 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666226 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666376 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666386 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.666485 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666501 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666647 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666724 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.666739 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.666807 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.666917 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.667272 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.667368 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.667653 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.667728 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.668086 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.668353 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.668498 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.668798 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.668983 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669128 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669377 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669768 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669878 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669909 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.669970 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.669972 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670123 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670219 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670345 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670492 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.670619 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670723 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.670785 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.670923 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.671046 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.671141 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.671272 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.671434 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.671526 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.671723 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.671898 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.672080 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.672297 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.672475 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.672718 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.672963 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.673159 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:47 crc kubenswrapper[3561]: E1203 00:07:47.673317 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.736361 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:47 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:47 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:47 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:47 crc kubenswrapper[3561]: I1203 00:07:47.736470 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.663758 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.663800 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.663900 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.663988 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664084 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664122 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664208 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664201 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664365 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664431 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664443 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.664472 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664495 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664754 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664778 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.664904 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.665393 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.665423 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.665527 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:48 crc kubenswrapper[3561]: E1203 00:07:48.665757 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.735336 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:48 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:48 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:48 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:48 crc kubenswrapper[3561]: I1203 00:07:48.735520 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663510 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663614 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663731 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663813 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.663841 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663890 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664007 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664067 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664133 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664217 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664263 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664321 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664386 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664448 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664599 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664622 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664679 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664693 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664785 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664824 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.664878 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.664974 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665047 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.665125 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.665207 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665257 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.665331 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665439 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.665508 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665567 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665638 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.665704 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665747 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.665886 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.666016 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.666235 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.666351 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.666478 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.666771 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.667010 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.667331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.667488 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.667807 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.668029 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.668640 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.668644 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663582 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.668786 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.663512 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.668866 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.668940 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669014 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.669052 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669142 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669224 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.669268 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.669330 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669407 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.669446 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669514 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669592 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669642 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669691 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669738 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669785 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669859 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669935 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.669997 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.670046 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.670106 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.670181 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:49 crc kubenswrapper[3561]: E1203 00:07:49.670251 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.735888 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:49 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:49 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:49 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:49 crc kubenswrapper[3561]: I1203 00:07:49.735998 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.227811 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.228208 3561 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="25e251c91f998883cec92448e57ffcbd0f46f7190f3879fe24b99ae2240a1795" exitCode=1
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.228295 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"25e251c91f998883cec92448e57ffcbd0f46f7190f3879fe24b99ae2240a1795"}
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.229268 3561 scope.go:117] "RemoveContainer" containerID="25e251c91f998883cec92448e57ffcbd0f46f7190f3879fe24b99ae2240a1795"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663469 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663666 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663708 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.663803 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663855 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663902 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663930 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.664070 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.664097 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.664317 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.664473 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.664627 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.664834 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.665010 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.665205 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.665364 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.665466 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.663521 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:50 crc kubenswrapper[3561]: E1203 00:07:50.666718 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.736821 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:50 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:50 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:50 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:50 crc kubenswrapper[3561]: I1203 00:07:50.736965 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.236708 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.237049 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"83e5851fa9757464d7d57e36e5eb573f39fcbee9a3bd0805c37da4e2998af6a2"}
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664627 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664654 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664685 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664728 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664754 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664781 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664803 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664773 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664835 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664870 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664876 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664923 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664989 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665003 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664948 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665026 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665000 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665041 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665028 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665004 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665067 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665097 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665103 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665109 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.664958 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665138 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.665073 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.666932 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.666967 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667105 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667236 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.667283 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.667336 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667482 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.667501 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667670 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667779 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.667865 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.667964 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668052 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668151 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668256 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.668319 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668411 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668494 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.668561 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.668589 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668758 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.668856 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.668906 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.668958 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669048 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669120 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669187 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669251 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669314 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669378 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669425 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669479 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669830 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669859 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.669971 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670036 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670095 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670225 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670310 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670456 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670600 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670761 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670836 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670885 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.670997 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.671185 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.671207 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:51 crc kubenswrapper[3561]: E1203 00:07:51.690914 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.736703 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:51 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:51 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:51 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:51 crc kubenswrapper[3561]: I1203 00:07:51.736814 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664226 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664369 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664432 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.664526 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664568 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664664 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664673 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.664740 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.664812 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.665005 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.665132 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.665273 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.665435 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.665768 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.665811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.665791 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.665896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.666017 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.666217 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:52 crc kubenswrapper[3561]: E1203 00:07:52.666371 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.735138 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:52 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:52 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:52 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:52 crc kubenswrapper[3561]: I1203 00:07:52.735275 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664267 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664308 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664348 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664269 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.664527 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664582 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664617 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664758 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664785 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664832 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664850 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.664794 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.664938 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665083 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665103 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665110 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665147 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665213 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665261 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665272 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665312 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665404 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665582 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665681 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665766 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.665839 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.665995 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.666020 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.666110 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.666263 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.666506 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.666668 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.666858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.666948 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667003 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667089 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667196 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667298 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667314 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667457 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667458 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667561 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667604 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667605 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667732 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667810 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667809 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667857 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.667918 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.667947 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668061 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.668104 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668236 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.668308 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668364 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.668422 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668519 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668661 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668800 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.668878 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.668983 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.669109 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.669475 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.669636 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.669876 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.669944 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670013 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670047 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670124 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670265 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670431 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670535 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:53 crc kubenswrapper[3561]: E1203 00:07:53.670676 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.735952 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:53 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:53 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:53 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:53 crc kubenswrapper[3561]: I1203 00:07:53.736532 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664235 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664235 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.664942 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664296 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664294 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665175 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664332 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664336 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664375 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664380 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665305 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664474 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.664480 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665413 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665615 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665737 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.665938 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.666069 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.666271 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:54 crc kubenswrapper[3561]: E1203 00:07:54.666436 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.736493 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:54 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:54 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:54 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:54 crc kubenswrapper[3561]: I1203 00:07:54.736629 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664135 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664206 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664296 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664379 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.664405 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664311 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664504 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664639 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664442 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664650 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664696 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664389 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664589 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664797 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664805 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.664804 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664657 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664537 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664911 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.664525 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665004 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665116 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665120 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665192 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665214 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.665057 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665238 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665301 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665309 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665193 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.665418 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665489 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665493 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665575 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665618 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665534 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.665688 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.665771 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.665928 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666156 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666379 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666519 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666705 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.666841 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666842 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.666905 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.667029 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667143 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667281 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667446 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667564 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667724 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667785 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667802 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.667944 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668092 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668190 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668305 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668381 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668649 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668783 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.668878 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669076 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669160 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669174 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669281 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669299 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.669433 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.670219 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.670265 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.670297 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:55 crc kubenswrapper[3561]: E1203 00:07:55.670469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.735362 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:55 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:55 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:55 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:55 crc kubenswrapper[3561]: I1203 00:07:55.735456 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664634 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664765 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664796 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664882 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664806 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664905 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664967 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.664961 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.665642 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.665004 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.665028 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.665851 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.665471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.666097 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.666272 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.666382 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.666516 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.666697 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.666875 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.667060 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:56 crc kubenswrapper[3561]: E1203 00:07:56.692924 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.735861 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:56 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:56 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:56 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:56 crc kubenswrapper[3561]: I1203 00:07:56.735972 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.663665 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.663804 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.663839 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.663974 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664026 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664086 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664074 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664126 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664096 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664164 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664189 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664133 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664038 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664002 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664359 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.664357 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664032 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664052 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.663983 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664096 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.663999 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664144 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.664525 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664176 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.664725 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664008 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664173 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664829 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.664833 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.665016 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665083 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665250 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.665265 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.665410 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.665614 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665652 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665673 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.665791 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665847 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.665856 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666007 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.666110 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666217 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666431 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666716 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666851 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666873 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.666876 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.666993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667128 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667314 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667451 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.667495 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667635 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667795 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.667862 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.667993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668170 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668226 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668298 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668482 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668699 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668825 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.668920 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669094 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669256 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669310 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669412 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669673 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.669787 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.670518 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:07:57 crc kubenswrapper[3561]: E1203 00:07:57.670791 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.736150 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:57 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:57 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:57 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:57 crc kubenswrapper[3561]: I1203 00:07:57.736280 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663600 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663729 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663745 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663819 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663857 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.663954 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.663759 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.664008 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664064 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.664132 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664203 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.664238 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664324 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664397 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.664427 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664501 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664647 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664708 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664771 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:07:58 crc kubenswrapper[3561]: E1203 00:07:58.664852 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.735475 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:07:58 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:07:58 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:07:58 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:07:58 crc kubenswrapper[3561]: I1203 00:07:58.735619 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.056811 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.663873 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664023 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664045 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664117 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664116 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664188 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664215 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664221 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664297 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664309 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664218 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664361 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664381 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664380 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664404 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664448 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664478 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664490 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664381 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664529 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664575 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664476 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664620 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664427 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664637 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664714 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664477 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664773 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664778 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.664584 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664855 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664418 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664869 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.664757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.665064 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.665247 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.665472 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.665613 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.665725 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.665755 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.665876 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666118 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666226 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666407 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666515 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666702 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666829 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.666967 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667088 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667200 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.667269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667412 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667490 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667590 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667807 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667891 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667894 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667978 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667987 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.667981 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668054 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668138 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668261 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668356 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668475 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668614 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668823 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.668993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.669135 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.669156 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.669209 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.669323 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:07:59 crc kubenswrapper[3561]: E1203 00:07:59.669387 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.734866 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:07:59 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:07:59 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:07:59 crc kubenswrapper[3561]: healthz check failed Dec 03 00:07:59 crc kubenswrapper[3561]: I1203 00:07:59.735052 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664576 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664666 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664681 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664791 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664592 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664760 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664671 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664899 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.664589 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.665039 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.665175 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.665919 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.666300 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.666502 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.666762 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.673430 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.673637 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.673924 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:08:00 crc kubenswrapper[3561]: E1203 00:08:00.674028 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.735767 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:00 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:00 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:00 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:00 crc kubenswrapper[3561]: I1203 00:08:00.735877 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.664376 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.664426 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.664468 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.664512 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.668059 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668092 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668251 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668346 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.668440 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668461 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668494 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.668647 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668713 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668744 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668801 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.668839 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.669022 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669149 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.669244 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669151 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669295 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669320 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.669403 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669469 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.669621 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669695 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669716 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669713 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669786 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669799 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669625 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.669874 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669753 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.669974 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.670097 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.670227 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670305 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670311 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.670422 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670431 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670478 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670516 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670725 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.670657 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.670939 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.670989 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671084 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671134 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671174 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671273 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.671324 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671348 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.671384 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671576 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671625 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671709 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.671985 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.672201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.672316 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.672386 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.672832 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.672980 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673232 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673290 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673433 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673598 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673700 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673823 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.673921 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.674105 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.674214 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.674590 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:08:01 crc kubenswrapper[3561]: E1203 00:08:01.695020 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.735859 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:01 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:01 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:01 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:01 crc kubenswrapper[3561]: I1203 00:08:01.735982 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663644 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663694 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663726 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663775 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663644 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663788 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663830 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.663978 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.663993 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.664176 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.664256 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.664469 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.664590 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.664723 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.664916 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.665061 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.665198 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.665405 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.665519 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:08:02 crc kubenswrapper[3561]: E1203 00:08:02.665633 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.735661 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:02 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:02 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:02 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:02 crc kubenswrapper[3561]: I1203 00:08:02.735749 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.663690 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.663814 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.663960 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.664080 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664676 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664699 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.665964 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664793 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664830 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666171 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664910 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.664958 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665009 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666341 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665045 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665110 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666495 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665143 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665179 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666626 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665213 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665263 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666729 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665302 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665300 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665337 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665351 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.666919 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665378 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665380 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665386 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665411 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.667062 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665414 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665472 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665482 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665488 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665508 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.667227 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665532 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665578 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665590 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665614 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665644 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.665663 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.667416 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.667521 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.667769 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668101 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668227 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668447 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668732 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668835 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.668908 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.669015 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.669217 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.669385 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.669650 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.669845 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670000 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670036 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670097 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670179 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670366 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670457 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670687 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.670893 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.671055 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.671242 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.671403 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:08:03 crc kubenswrapper[3561]: E1203 00:08:03.671598 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.735916 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:03 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:03 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:03 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:03 crc kubenswrapper[3561]: I1203 00:08:03.736013 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.663834 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.663921 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.663944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.664084 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664133 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664163 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664317 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.664393 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664409 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664443 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.664658 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.664801 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.664865 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.664969 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.665152 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.665228 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.665261 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.665356 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.665503 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 03 00:08:04 crc kubenswrapper[3561]: E1203 00:08:04.665737 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.735716 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:04 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:04 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:04 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:04 crc kubenswrapper[3561]: I1203 00:08:04.735816 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126132 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126218 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126311 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126415 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126415 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126470 3561 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126482 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126591 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.126522701 +0000 UTC m=+147.906956999 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126623 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.126610464 +0000 UTC m=+147.907044752 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126643 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126703 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126733 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.126709347 +0000 UTC m=+147.907143645 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.126760 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.126747118 +0000 UTC m=+147.907181416 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.126914 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127002 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127078 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127099 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127151 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127158 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.12713772 +0000 UTC m=+147.907572018 (durationBeforeRetry 1m4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127226 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127203923 +0000 UTC m=+147.907638221 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127236 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127271 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127274 3561 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127286 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127272824 +0000 UTC m=+147.907707122 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127354 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127421 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127439 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127420619 +0000 UTC m=+147.907854907 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127467 3561 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127484 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127521 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127507491 +0000 UTC m=+147.907941789 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127578 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:09:09.127565613 +0000 UTC m=+147.907999901 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127604 3561 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127655 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127662 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127644986 +0000 UTC m=+147.908079274 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127719 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127719 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127777 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127769 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127756549 +0000 UTC m=+147.908190837 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127838 3561 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127869 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127851572 +0000 UTC m=+147.908285870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127871 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.127844 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127896 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127882603 +0000 UTC m=+147.908316891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.127921 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.127907794 +0000 UTC m=+147.908342082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128025 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128071 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128118 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128182 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128240 3561 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128251 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128244 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128300 3561 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" 
not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128249 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.128232315 +0000 UTC m=+147.908666603 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128344 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.128327168 +0000 UTC m=+147.908761456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128400 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.12838658 +0000 UTC m=+147.908820878 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128469 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128627 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.128601126 +0000 UTC m=+147.909035434 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128602 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128681 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128687 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.128673539 +0000 UTC m=+147.909107827 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128751 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128807 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.128788772 +0000 UTC m=+147.909223210 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128809 3561 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128751 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.128898 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.128962 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129031 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129000209 +0000 UTC m=+147.909434497 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129073 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129052901 +0000 UTC m=+147.909487199 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129187 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129281 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129258148 +0000 UTC m=+147.909692496 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.129328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.129414 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.129516 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129638 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not 
registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129656 3561 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129708 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129688791 +0000 UTC m=+147.910123079 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129750 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129730643 +0000 UTC m=+147.910164991 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129845 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.129940 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.129914679 +0000 UTC m=+147.910348987 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.129716 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130103 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130210 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130250 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130296 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130315 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.13029234 +0000 UTC m=+147.910726638 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130318 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130386 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130388 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130368772 +0000 UTC m=+147.910803060 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130436 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130500 3561 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130527 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130576 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130560678 +0000 UTC m=+147.910994976 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130613 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130631 3561 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130662 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130651861 +0000 UTC m=+147.911086209 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130617 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130684 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130670592 +0000 UTC m=+147.911104890 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130683 3561 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130708 3561 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130730 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130744 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130733604 +0000 UTC m=+147.911167892 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130815 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130834 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130845 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.130781 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130878 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.130867988 +0000 UTC m=+147.911302396 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130910 3561 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.130980 3561 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131017 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.131006983 +0000 UTC m=+147.911441401 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131138 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131153 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.131133137 +0000 UTC m=+147.911567425 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131196 3561 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131228 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.131219229 +0000 UTC m=+147.911653627 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131277 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131380 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131438 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.131416746 +0000 UTC m=+147.911851034 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131471 3561 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.131534 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.131517909 +0000 UTC m=+147.911952197 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131885 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131942 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.131990 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132070 3561 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132104 3561 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132126 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.132076 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132142 3561 
secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132129 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.132114268 +0000 UTC m=+147.912548566 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.132201 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132225 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132271 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.132261643 +0000 UTC m=+147.912696021 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132435 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.132416238 +0000 UTC m=+147.912850526 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132479 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.13246545 +0000 UTC m=+147.912899738 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.132509 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.132496461 +0000 UTC m=+147.912930749 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.133213 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.133269 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:05 crc 
kubenswrapper[3561]: I1203 00:08:05.133316 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133441 3561 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133457 3561 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133472 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133482 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.133469091 +0000 UTC m=+147.913903439 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133651 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:09:09.133615865 +0000 UTC m=+147.914050253 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.133687 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.133667897 +0000 UTC m=+147.914102285 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.235139 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.235723 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.236059 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.236339 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.236533 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.236820 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.237077 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.237314 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237428 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237195 3561 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237606 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.237567296 +0000 UTC m=+148.018001594 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.236918 3561 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237695 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.237657089 +0000 UTC m=+148.018091387 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237768 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.237736091 +0000 UTC m=+148.018170379 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.236647 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237840 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.237820904 +0000 UTC m=+148.018255272 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.236422 3561 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.237962 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.237936618 +0000 UTC m=+148.018370906 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.236167 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.235432 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.238032 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.23801176 +0000 UTC m=+148.018446048 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.238129 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.238105283 +0000 UTC m=+148.018539641 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.238680 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.238747 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.238831 3561 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.238885 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.238900 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.238884368 +0000 UTC m=+148.019318656 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239033 3561 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.235831 3561 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239122 3561 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239126 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.239103555 +0000 UTC m=+148.019537913 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239229 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.239202407 +0000 UTC m=+148.019636745 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239256 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239338 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.239318581 +0000 UTC m=+148.019752869 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.239398 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.239385583 +0000 UTC m=+148.019819881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.240121 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.240425 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.240485 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.240590 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240590 3561 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.240649 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240682 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.240654814 +0000 UTC m=+148.021089112 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240777 3561 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240820 3561 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240895 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.240867001 +0000 UTC m=+148.021301289 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240927 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.240912402 +0000 UTC m=+148.021346690 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.240978 3561 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241059 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241124 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241164 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241127499 +0000 UTC m=+148.021561797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241213 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241259 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241245703 +0000 UTC m=+148.021679991 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241267 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241334 3561 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241365 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241377 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241365827 +0000 UTC m=+148.021800115 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241488 3561 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241511 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241621 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241654 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241674 3561 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241703 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241728 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241695337 +0000 UTC m=+148.022129635 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241746 3561 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241783 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.24176505 +0000 UTC m=+148.022199348 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241811 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241798751 +0000 UTC m=+148.022233049 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241832 3561 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241896 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241919 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.241892584 +0000 UTC m=+148.022326902 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.241968 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.241997 3561 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242018 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242052 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242017907 +0000 UTC m=+148.022452205 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242098 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242078479 +0000 UTC m=+148.022512917 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242108 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242106 3561 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242150 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242137521 +0000 UTC m=+148.022571819 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242175 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242163231 +0000 UTC m=+148.022597519 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242184 3561 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242216 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242244 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242226793 +0000 UTC m=+148.022661091 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242316 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242346 3561 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242368 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242418 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242427 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242404949 +0000 UTC m=+148.022839327 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242514 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242534 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242609 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242611 3561 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03
00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242698 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242704 3561 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242730 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242741 3561 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242722 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242672348 +0000 UTC m=+148.023106686 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242780 3561 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242825 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242838 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242824153 +0000 UTC m=+148.023258441 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.242877 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242893 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242879374 +0000 UTC m=+148.023313672 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242915 3561 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242944 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:09:09.242927516 +0000 UTC m=+148.023361814 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242967 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242955597 +0000 UTC m=+148.023389885 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242978 3561 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.242991 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.242979138 +0000 UTC m=+148.023413426 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.243034 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243059 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.243028989 +0000 UTC m=+148.023463347 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243154 3561 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.243175 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243210 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.243196045 +0000 UTC m=+148.023630343 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243252 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.243235856 +0000 UTC m=+148.023670144 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243337 3561 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.243520 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.243494614 +0000 UTC m=+148.023929002 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.244397 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.244469 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.244529 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.244627 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244668 3561 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244688 3561 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244747 3561 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.244674 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244782 3561 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244754 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.244726634 +0000 UTC m=+148.025160972 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244821 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.244807045 +0000 UTC m=+148.025241343 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244823 3561 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244844 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.244833036 +0000 UTC m=+148.025267324 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244868 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.244857667 +0000 UTC m=+148.025291955 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.244915 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.244888998 +0000 UTC m=+148.025323356 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245026 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245130 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245139 3561 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object 
"openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245189 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245175737 +0000 UTC m=+148.025610025 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245232 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245263 3561 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245282 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245332 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls 
podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245313462 +0000 UTC m=+148.025747880 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245366 3561 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245398 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245410 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245396714 +0000 UTC m=+148.025831012 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245454 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245470 3561 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245508 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245497437 +0000 UTC m=+148.025931725 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245526 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245624 3561 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245641 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245667 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245655123 +0000 UTC m=+148.026089421 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.245709 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245740 3561 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245809 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245786447 +0000 UTC m=+148.026220825 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245825 3561 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245867 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245854599 +0000 UTC m=+148.026288887 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245898 3561 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245933 3561 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245961 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245942482 +0000 UTC m=+148.026376880 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.245987 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.245975223 +0000 UTC m=+148.026409511 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.349911 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.350027 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.350083 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.350577 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.350598 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.350612 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.350673 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.350656467 +0000 UTC m=+148.131090735 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.350998 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351106 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351121 3561 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351208 3561 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351119 3561 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351269 3561 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351323 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.351285386 +0000 UTC m=+148.131719644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.351404 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.351349558 +0000 UTC m=+148.131783846 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.453989 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.454174 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.454270 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454317 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454385 3561 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454418 3561 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454531 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454653 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454662 3561 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454682 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454710 3561 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454740 3561 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454840 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.454803252 +0000 UTC m=+148.235237550 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.454907 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.454883665 +0000 UTC m=+148.235318073 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.455122 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.455076671 +0000 UTC m=+148.235510959 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.557716 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.557813 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.557965 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558015 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558082 3561 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558088 3561 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558119 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558167 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558192 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558110 3561 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558141 3561 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558284 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.558255588 +0000 UTC m=+148.338689886 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558291 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.558193 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558325 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.558299409 +0000 UTC m=+148.338733707 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558352 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.55833997 +0000 UTC m=+148.338774268 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558423 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558454 3561 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558474 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.558592 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558668 3561 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558690 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.558868 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.558801685 +0000 UTC m=+148.339235983 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.559113 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.559096304 +0000 UTC m=+148.339530582 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.663529 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.663795 3561 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.663845 3561 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.663858 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.663866 3561 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664006 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664012 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664067 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664098 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664129 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664200 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664224 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664258 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664261 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664233 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664346 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664604 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664622 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664605 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664737 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664783 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664798 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664788 3561 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.664850 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664851 3561 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665038 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665719 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665039 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665805 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665840 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.664934 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665849 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665083 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.665046528 +0000 UTC m=+148.445480816 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665918 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665946 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.665920156 +0000 UTC m=+148.446354454 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.666022 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665102 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665146 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665177 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665289 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666171 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665362 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.666247 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666273 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665389 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665469 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665473 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665490 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666585 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665529 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666769 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.665609 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665644 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666822 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665768 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665810 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.665076 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.666965 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.667076 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667114 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667223 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667344 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667522 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667683 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.667773 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.667910 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668015 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668135 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668448 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668480 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668818 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.668975 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669165 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669229 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669285 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669302 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669417 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669570 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669707 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669825 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.669922 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.670102 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.735674 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:05 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:05 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:05 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.735801 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768062 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768126 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" 
(UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768167 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768269 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768306 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.768378 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768438 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768488 3561 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768519 3561 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768520 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768600 3561 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768621 3561 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768655 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.768622026 +0000 UTC m=+148.549056324 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768698 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768686 3561 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768761 3561 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768787 3561 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768813 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 
03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768713 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.768682948 +0000 UTC m=+148.549117246 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768521 3561 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768916 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768719 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769024 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. 
No retries permitted until 2025-12-03 00:09:09.768999868 +0000 UTC m=+148.549434156 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769033 3561 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.768841 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769091 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769176 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.769130603 +0000 UTC m=+148.549564911 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769229 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.769206595 +0000 UTC m=+148.549641103 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.769495 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.769426472 +0000 UTC m=+148.549860760 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.977646 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.977892 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.977948 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.977975 3561 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978047 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978083 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.978043936 +0000 UTC m=+148.758478244 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978086 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978121 3561 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.977912 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978202 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.97817738 +0000 UTC m=+148.758611688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.978495 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978678 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978710 3561 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978724 3561 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.978778 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.978759708 +0000 UTC m=+148.759193986 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.978822 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:08:05 crc kubenswrapper[3561]: I1203 00:08:05.978908 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979036 3561 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979091 3561 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979117 3561 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979210 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.979183242 +0000 UTC m=+148.759617550 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979226 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979294 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979315 3561 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:05 crc kubenswrapper[3561]: E1203 00:08:05.979412 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:09.979385558 +0000 UTC m=+148.759819856 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.083942 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.084052 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.084199 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084201 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.084246 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084264 3561 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084284 3561 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084316 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084355 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084374 3561 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084436 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.084397503 +0000 UTC m=+148.864831791 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084475 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.084460115 +0000 UTC m=+148.864894413 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084588 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084646 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084664 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084941 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.084901398 +0000 UTC m=+148.865335686 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.084667 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.085036 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.085056 3561 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.085247 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.085193667 +0000 UTC m=+148.865627955 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.186444 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.186588 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.186829 3561 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.186884 3561 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.186904 3561 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.186935 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.187003 3561 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.187033 3561 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.187129 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.187102023 +0000 UTC m=+148.967536321 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.187321 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.187291979 +0000 UTC m=+148.967726277 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.293014 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.293191 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293190 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293223 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293238 3561 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293511 3561 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293578 3561 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293599 3561 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293592 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.29346579 +0000 UTC m=+149.073900088 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.293825 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.293802941 +0000 UTC m=+149.074237199 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.397201 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.397283 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.397449 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397515 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397590 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397607 3561 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397629 3561 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397648 3561 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397653 3561 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.397602 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397709 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.39769099 +0000 UTC m=+149.178125248 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397747 3561 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397765 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397767 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.397731261 +0000 UTC m=+149.178165559 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397796 3561 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397801 3561 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397822 3561 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397829 3561 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.397903 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.397881726 +0000 UTC m=+149.178316014 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.398113 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.398086672 +0000 UTC m=+149.178520960 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.501857 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.502161 3561 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.502232 3561 projected.go:294]
Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.502254 3561 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.502610 3561 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.502662 3561 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.503063 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.503195 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.503163068 +0000 UTC m=+149.283597366 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.503252 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.50323682 +0000 UTC m=+149.283671108 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.503707 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.503925 3561 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.504151 3561 projected.go:294] Couldn't get configMap 
openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.504345 3561 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.504671 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-03 00:09:10.504639264 +0000 UTC m=+149.285073562 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663381 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663520 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663628 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663648 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663692 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663727 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.663824 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.663939 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664029 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.664059 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.664089 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664218 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664340 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664511 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664727 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664812 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.664948 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.665068 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.665278 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.665318 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:08:06 crc kubenswrapper[3561]: E1203 00:08:06.696514 3561 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.735307 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:06 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:06 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:06 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:06 crc kubenswrapper[3561]: I1203 00:08:06.735398 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664075 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664220 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664235 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664251 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664275 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664269 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664300 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664301 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664301 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664338 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664343 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664353 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664369 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664382 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664408 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664399 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664434 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664447 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664460 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664480 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664483 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664509 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664520 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664537 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664576 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664590 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664618 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664629 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664635 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664655 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664666 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664689 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664794 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664812 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.664819 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.668042 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.668536 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.668945 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669119 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669263 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669455 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669761 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669652 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.669919 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670037 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670163 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670313 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670433 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670589 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670711 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670830 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.670930 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671065 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671181 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671318 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671410 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671492 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671632 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671811 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.671897 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672000 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672104 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672266 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672306 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672411 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672522 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672608 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672689 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672782 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672865 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.672978 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:08:07 crc kubenswrapper[3561]: E1203 00:08:07.673131 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.736655 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:07 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:07 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:07 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:07 crc kubenswrapper[3561]: I1203 00:08:07.736770 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.663948 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664013 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.664154 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664161 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664267 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.664415 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664486 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664520 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.664590 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.665037 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.665055 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.665402 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.665435 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.665674 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.665703 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.665936 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.666045 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.666265 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.666425 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:08:08 crc kubenswrapper[3561]: E1203 00:08:08.666813 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.736202 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:08 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:08 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:08 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:08 crc kubenswrapper[3561]: I1203 00:08:08.736321 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664022 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664126 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664279 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.664285 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664418 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664643 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.664643 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664751 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664761 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664881 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.664978 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665004 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665021 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665049 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.664903 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665112 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665125 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665132 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665196 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665204 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.665111 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.665307 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.665588 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665676 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665689 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665683 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665720 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665762 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665767 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665803 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665818 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665829 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665747 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665877 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665883 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665915 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665924 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665932 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665923 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.665980 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.666016 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.666023 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.666311 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.666702 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667039 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667105 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667269 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667371 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667496 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667644 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667758 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.667866 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668032 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668207 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668317 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668422 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668534 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668690 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668810 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.668928 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669010 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669130 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669294 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669392 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669474 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669598 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669681 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.669858 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.670071 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.670118 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.670291 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.670471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:08:09 crc kubenswrapper[3561]: E1203 00:08:09.672025 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.735404 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:09 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:09 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:09 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:09 crc kubenswrapper[3561]: I1203 00:08:09.735522 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.663991 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664057 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664121 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664162 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664121 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664231 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664326 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664329 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664349 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.664605 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.664678 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.664884 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.664967 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665110 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665268 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665423 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665583 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665654 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665717 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 03 00:08:10 crc kubenswrapper[3561]: E1203 00:08:10.665795 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.736794 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:10 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:10 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:10 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:10 crc kubenswrapper[3561]: I1203 00:08:10.736921 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.663704 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.663762 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.663877 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664041 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664094 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.664144 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664250 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664336 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.664383 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664487 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664532 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.664603 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.664736 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664830 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.664929 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665043 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665101 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.665128 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.665201 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665254 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665280 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.665336 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665369 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665424 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.665427 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.665471 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665530 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.665726 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.667374 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667461 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667486 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667514 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667560 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667757 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.667780 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.667884 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.667959 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668131 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668168 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668206 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668215 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668248 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668252 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668285 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668256 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668301 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668335 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668562 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668611 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668677 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668736 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668741 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668848 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.668910 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.668983 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669015 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.669055 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669190 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669404 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669465 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669641 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.669822 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670014 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670208 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670264 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670382 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670523 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670739 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.670963 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.671117 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.671305 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.671327 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 03 00:08:11 crc kubenswrapper[3561]: E1203 00:08:11.671385 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.735434 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:11 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:11 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:11 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:11 crc kubenswrapper[3561]: I1203 00:08:11.735567 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.664230 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.664437 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.664606 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.664845 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.665575 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.665786 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.666646 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.669703 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.670106 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.672498 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.673135 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.678221 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.680220 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.687736 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.687839 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.687991 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.688780 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.688929 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689035 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689048 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689084 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689110 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689307 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689340 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689379 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689415 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.689916 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.690019 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.690271 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.690275 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.690396 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.690397 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.691970 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.693117 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.693203 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.693239 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.693679 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.694293 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.698972 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.699322 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.701175 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.702848 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.702980 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.704139 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.705610 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.713862 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.715135 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.718190 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.720809 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.735084 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:12 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:12 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:12 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:12 crc kubenswrapper[3561]: I1203 00:08:12.735479 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671347 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671372 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671453 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671474 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671731 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671736 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671879 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672072 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672080 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672790 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672981 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.672986 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673119 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673131 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673192 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673131 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673289 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673391 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673531 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673581 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673732 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673746 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.673859 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.674169 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.674392 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.674649 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.674762 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.675379 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.675447 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.671386 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.676472 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.681060 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.684497 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.685678 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686056 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686112 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686232 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686463 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686472 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686611 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686671 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686698 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686763 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686864 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686879 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686906 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.686937 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687019 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687113 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687116 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687236 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687673 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687727 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687762 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687789 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687842 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687911 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.687731 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688016 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688144 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688225 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688286 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688335 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688293 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688391 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688496 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688573 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688656 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688722 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688754 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688772 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.688987 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.689049 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.689132 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.690015 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.690149 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.690284 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.690651 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.690969 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.691007 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.691140 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.691492 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.691710 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.691861 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.692042 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.692271 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.695862 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.696603 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.699815 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.701012 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.702231 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:08:13 crc 
kubenswrapper[3561]: I1203 00:08:13.709766 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.710233 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.710716 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711131 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711267 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711148 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711500 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711647 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711666 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711808 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711888 3561 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.711812 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.714709 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.714953 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715169 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715404 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715439 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715703 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715807 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.715899 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716044 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716063 3561 
reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716193 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716265 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716323 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716470 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716499 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716709 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716732 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716859 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716911 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.716988 3561 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.717118 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.717176 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.717416 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.717658 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.717729 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.718028 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.718174 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.718350 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.718631 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.720283 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:08:13 crc 
kubenswrapper[3561]: I1203 00:08:13.720579 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.725421 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.736001 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.737875 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:13 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:13 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:13 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.738055 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.740058 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.744032 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.747365 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.756099 3561 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.775008 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.795411 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.826464 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.835321 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.854431 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.874499 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.895702 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.915138 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.935180 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.955654 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.974721 3561 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:08:13 crc kubenswrapper[3561]: I1203 00:08:13.995850 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:08:14 crc kubenswrapper[3561]: I1203 00:08:14.015474 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:08:14 crc kubenswrapper[3561]: I1203 00:08:14.040624 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:08:14 crc kubenswrapper[3561]: I1203 00:08:14.054855 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 03 00:08:14 crc kubenswrapper[3561]: I1203 00:08:14.735319 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:14 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:14 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:14 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:14 crc kubenswrapper[3561]: I1203 00:08:14.735708 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:15 crc kubenswrapper[3561]: I1203 00:08:15.734740 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:15 crc kubenswrapper[3561]: [-]has-synced failed: 
reason withheld Dec 03 00:08:15 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:15 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:15 crc kubenswrapper[3561]: I1203 00:08:15.734841 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:16 crc kubenswrapper[3561]: I1203 00:08:16.736764 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:16 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:16 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:16 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:16 crc kubenswrapper[3561]: I1203 00:08:16.736876 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:17 crc kubenswrapper[3561]: I1203 00:08:17.735309 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:17 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:17 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:17 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:17 crc kubenswrapper[3561]: I1203 00:08:17.735452 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:18 crc kubenswrapper[3561]: I1203 00:08:18.736603 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:18 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:18 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:18 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:18 crc kubenswrapper[3561]: I1203 00:08:18.736725 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:19 crc kubenswrapper[3561]: I1203 00:08:19.464902 3561 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady" Dec 03 00:08:19 crc kubenswrapper[3561]: I1203 00:08:19.748408 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:19 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:19 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:19 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:19 crc kubenswrapper[3561]: I1203 00:08:19.748581 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:20 crc kubenswrapper[3561]: 
I1203 00:08:20.736020 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:20 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:20 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:20 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:20 crc kubenswrapper[3561]: I1203 00:08:20.736114 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:21 crc kubenswrapper[3561]: I1203 00:08:21.736147 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:21 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:21 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:21 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:21 crc kubenswrapper[3561]: I1203 00:08:21.737592 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:22 crc kubenswrapper[3561]: I1203 00:08:22.736023 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:22 crc kubenswrapper[3561]: [-]has-synced failed: 
reason withheld Dec 03 00:08:22 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:22 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:22 crc kubenswrapper[3561]: I1203 00:08:22.736158 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:23 crc kubenswrapper[3561]: I1203 00:08:23.736386 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:23 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:23 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:23 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:23 crc kubenswrapper[3561]: I1203 00:08:23.736499 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:24 crc kubenswrapper[3561]: I1203 00:08:24.735416 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:24 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:24 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:24 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:24 crc kubenswrapper[3561]: I1203 00:08:24.735522 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:25 crc kubenswrapper[3561]: I1203 00:08:25.896802 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:25 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:25 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:25 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:25 crc kubenswrapper[3561]: I1203 00:08:25.896886 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:26 crc kubenswrapper[3561]: I1203 00:08:26.736512 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:26 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld Dec 03 00:08:26 crc kubenswrapper[3561]: [+]process-running ok Dec 03 00:08:26 crc kubenswrapper[3561]: healthz check failed Dec 03 00:08:26 crc kubenswrapper[3561]: I1203 00:08:26.736671 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:27 crc kubenswrapper[3561]: I1203 00:08:27.736427 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:27 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:27 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:27 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:27 crc kubenswrapper[3561]: I1203 00:08:27.736535 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:28 crc kubenswrapper[3561]: I1203 00:08:28.736472 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:28 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:28 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:28 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:28 crc kubenswrapper[3561]: I1203 00:08:28.736605 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:29 crc kubenswrapper[3561]: I1203 00:08:29.050689 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:08:29 crc kubenswrapper[3561]: I1203 00:08:29.735835 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:29 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:29 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:29 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:29 crc kubenswrapper[3561]: I1203 00:08:29.735924 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:30 crc kubenswrapper[3561]: I1203 00:08:30.735855 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:30 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:30 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:30 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:30 crc kubenswrapper[3561]: I1203 00:08:30.735962 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:31 crc kubenswrapper[3561]: I1203 00:08:31.735183 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:31 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:31 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:31 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:31 crc kubenswrapper[3561]: I1203 00:08:31.735350 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:32 crc kubenswrapper[3561]: I1203 00:08:32.736840 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:32 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:32 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:32 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:32 crc kubenswrapper[3561]: I1203 00:08:32.737036 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:33 crc kubenswrapper[3561]: I1203 00:08:33.735684 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:33 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:33 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:33 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:33 crc kubenswrapper[3561]: I1203 00:08:33.735816 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:34 crc kubenswrapper[3561]: I1203 00:08:34.735813 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:34 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:34 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:34 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:34 crc kubenswrapper[3561]: I1203 00:08:34.735913 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:35 crc kubenswrapper[3561]: I1203 00:08:35.737008 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:35 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:35 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:35 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:35 crc kubenswrapper[3561]: I1203 00:08:35.737135 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:36 crc kubenswrapper[3561]: I1203 00:08:36.736078 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:36 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:36 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:36 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:36 crc kubenswrapper[3561]: I1203 00:08:36.736173 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:37 crc kubenswrapper[3561]: I1203 00:08:37.735626 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:37 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:37 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:37 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:37 crc kubenswrapper[3561]: I1203 00:08:37.735765 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:38 crc kubenswrapper[3561]: I1203 00:08:38.740220 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:38 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:38 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:38 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:38 crc kubenswrapper[3561]: I1203 00:08:38.740357 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:39 crc kubenswrapper[3561]: I1203 00:08:39.735857 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:39 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:39 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:39 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:39 crc kubenswrapper[3561]: I1203 00:08:39.736135 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:40 crc kubenswrapper[3561]: I1203 00:08:40.735517 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:40 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:40 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:40 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:40 crc kubenswrapper[3561]: I1203 00:08:40.735667 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.500751 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.500832 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.500863 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.500905 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.500943 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.736979 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:41 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:41 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:41 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:41 crc kubenswrapper[3561]: I1203 00:08:41.738204 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:42 crc kubenswrapper[3561]: I1203 00:08:42.735714 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:42 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:42 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:42 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:42 crc kubenswrapper[3561]: I1203 00:08:42.736289 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:43 crc kubenswrapper[3561]: I1203 00:08:43.736828 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:43 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:43 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:43 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:43 crc kubenswrapper[3561]: I1203 00:08:43.736995 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:44 crc kubenswrapper[3561]: I1203 00:08:44.735425 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:44 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:44 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:44 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:44 crc kubenswrapper[3561]: I1203 00:08:44.735495 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:45 crc kubenswrapper[3561]: I1203 00:08:45.736339 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:45 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:45 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:45 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:45 crc kubenswrapper[3561]: I1203 00:08:45.736505 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:46 crc kubenswrapper[3561]: I1203 00:08:46.736518 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:46 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:46 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:46 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:46 crc kubenswrapper[3561]: I1203 00:08:46.736715 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:47 crc kubenswrapper[3561]: I1203 00:08:47.736690 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:47 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:47 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:47 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:47 crc kubenswrapper[3561]: I1203 00:08:47.737680 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:48 crc kubenswrapper[3561]: I1203 00:08:48.735646 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:48 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:48 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:48 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:48 crc kubenswrapper[3561]: I1203 00:08:48.735729 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:49 crc kubenswrapper[3561]: I1203 00:08:49.736185 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:49 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:49 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:49 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:49 crc kubenswrapper[3561]: I1203 00:08:49.736308 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:50 crc kubenswrapper[3561]: I1203 00:08:50.736806 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:50 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:50 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:50 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:50 crc kubenswrapper[3561]: I1203 00:08:50.736971 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:51 crc kubenswrapper[3561]: I1203 00:08:51.736254 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:51 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:51 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:51 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:51 crc kubenswrapper[3561]: I1203 00:08:51.736380 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:52 crc kubenswrapper[3561]: I1203 00:08:52.735040 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:52 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:52 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:52 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:52 crc kubenswrapper[3561]: I1203 00:08:52.735121 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:53 crc kubenswrapper[3561]: I1203 00:08:53.735336 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:53 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:53 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:53 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:53 crc kubenswrapper[3561]: I1203 00:08:53.735444 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:54 crc kubenswrapper[3561]: I1203 00:08:54.736975 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:54 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:54 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:54 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:54 crc kubenswrapper[3561]: I1203 00:08:54.737101 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:55 crc kubenswrapper[3561]: I1203 00:08:55.735596 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:55 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:55 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:55 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:55 crc kubenswrapper[3561]: I1203 00:08:55.735739 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:56 crc kubenswrapper[3561]: I1203 00:08:56.747385 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:56 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:56 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:56 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:56 crc kubenswrapper[3561]: I1203 00:08:56.747571 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:57 crc kubenswrapper[3561]: I1203 00:08:57.735792 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:57 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:57 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:57 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:57 crc kubenswrapper[3561]: I1203 00:08:57.735917 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:58 crc kubenswrapper[3561]: I1203 00:08:58.736884 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:58 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:58 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:58 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:58 crc kubenswrapper[3561]: I1203 00:08:58.736986 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:59 crc kubenswrapper[3561]: I1203 00:08:59.736276 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:08:59 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:08:59 crc kubenswrapper[3561]: I1203 00:08:59.736397 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:00 crc kubenswrapper[3561]: I1203 00:09:00.735788 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:00 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:09:00 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:09:00 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:09:00 crc kubenswrapper[3561]: I1203 00:09:00.735954 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:01 crc kubenswrapper[3561]: I1203 00:09:01.736131 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:01 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:09:01 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:09:01 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:09:01 crc kubenswrapper[3561]: I1203 00:09:01.736262 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:02 crc kubenswrapper[3561]: I1203 00:09:02.735798 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:02 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:09:02 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:09:02 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:09:02 crc kubenswrapper[3561]: I1203 00:09:02.736717 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:03 crc kubenswrapper[3561]: I1203 00:09:03.736597 3561 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:03 crc kubenswrapper[3561]: [-]has-synced failed: reason withheld
Dec 03 00:09:03 crc kubenswrapper[3561]: [+]process-running ok
Dec 03 00:09:03 crc kubenswrapper[3561]: healthz check failed
Dec 03 00:09:03 crc kubenswrapper[3561]: I1203 00:09:03.736737 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:03 crc kubenswrapper[3561]: I1203 00:09:03.736817 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 03 00:09:03 crc kubenswrapper[3561]: I1203 00:09:03.738738 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"480f0970e9e9dc9b9af0dc4fbf13231ac94f2e6658d265a517c39d1ae9f0323c"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted"
Dec 03 00:09:03 crc kubenswrapper[3561]: I1203 00:09:03.738814 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://480f0970e9e9dc9b9af0dc4fbf13231ac94f2e6658d265a517c39d1ae9f0323c" gracePeriod=3600
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187063 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187114 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187137 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187158 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187182 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187204 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187230 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187269 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187307 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187348 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187375 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187396 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187415 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187438 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187459 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187486 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187507 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187564 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187586 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187605 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187632 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187654 3561 reconciler_common.go:231]
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187687 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187715 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187740 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187769 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187820 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187844 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187868 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187912 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.187950 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188019 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188055 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188077 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188104 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188135 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188193 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.188234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.189823 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.189924 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" 
(UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.189974 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.190426 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.190467 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.190517 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.190567 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.190617 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: E1203 00:09:09.191662 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:11:11.191634604 +0000 UTC m=+269.972068872 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.202045 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.202074 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.202969 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.203236 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.203511 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.203627 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.203730 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.204010 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.203291 3561 reflector.go:351] Caches populated 
for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.204511 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.204790 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.205062 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.205296 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.205440 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.205639 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.205714 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206054 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206411 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206436 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206719 3561 
reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206778 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206865 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.206750 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.207152 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.207387 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.210718 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.211961 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.212649 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.213481 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.213510 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.214232 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.214752 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.215279 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.215778 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.216987 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.217687 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.217851 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.217891 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.218037 3561 reflector.go:351] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.218422 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.218706 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.219531 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.219717 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.219728 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.219865 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.221019 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.224720 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 
00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.217998 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.224947 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.225096 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.225345 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.226910 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.217953 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.228092 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.234734 3561 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.237277 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.235282 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.237710 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.235503 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.236752 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod 
\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.237164 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.237907 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.237936 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.238346 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.239050 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" 
(UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.239278 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.242127 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.242531 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.242724 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.242910 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.243954 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.245432 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.245670 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.245941 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.247606 3561 operation_generator.go:721] "MountVolume.SetUp 
succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.247654 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.247978 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.249706 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.257798 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.262909 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" 
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.265257 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.265799 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.267223 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.270869 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.272880 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.275929 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.276273 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 
00:09:09.282763 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.283256 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.283984 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.285303 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.285835 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.291672 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.292135 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.292343 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.292478 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.292682 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.292829 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294146 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294337 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294344 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294438 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294466 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: 
\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294503 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294660 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294711 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294743 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294788 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294832 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.294925 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295006 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295094 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:09 crc 
kubenswrapper[3561]: I1203 00:09:09.295158 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295192 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295253 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295405 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295496 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295504 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295578 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295672 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295746 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295781 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.295857 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296006 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296093 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 
00:09:09.296130 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296196 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296259 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296334 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296372 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296401 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296463 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296510 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296578 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296621 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296654 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296694 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296735 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296806 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 
00:09:09.296865 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296896 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296927 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296966 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.296997 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.297028 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.297052 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.297147 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.297760 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.298214 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.298413 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.298659 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.298736 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.300361 3561 reflector.go:351] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.300751 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.301076 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.301275 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.301684 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.301999 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.302364 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.302578 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.302749 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.302926 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.303103 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.303426 
3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.303646 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305063 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305115 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305151 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305259 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305355 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305604 3561 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305705 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305738 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.305983 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.306254 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.307965 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.308420 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.308442 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 
00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.308516 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.308635 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.308823 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.309132 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.309263 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.309448 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.310220 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.310601 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.310714 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.311514 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.311831 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.311886 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.312657 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313023 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313137 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313235 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313378 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313581 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" 
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.313861 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314063 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314310 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314458 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314487 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314768 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314859 3561 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314878 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.314993 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.315083 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.315162 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.315241 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.315315 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.315854 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316076 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316195 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316355 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316426 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316595 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.316850 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.317224 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: 
\"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.317517 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.319435 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.321264 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.322093 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.322347 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.322610 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.323123 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.323174 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.324303 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.324395 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: 
\"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.324430 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.325062 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.326165 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.326451 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.326659 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.326773 3561 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.330216 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.330622 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.331694 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.333610 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.351468 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.351578 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.351830 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.351942 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.352517 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.352944 3561 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.353025 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.354063 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.382784 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.397909 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.397984 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.398020 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.401236 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.401349 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.401372 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.411475 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.411526 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.420942 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.423876 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.424853 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.425506 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.430314 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.437769 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.475294 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.475325 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.499147 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.499252 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.499301 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.505945 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.513148 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.513234 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.519072 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.519656 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.523552 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.523565 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.524409 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.529238 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.552886 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.596898 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.597250 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.599890 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.600645 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.601505 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.604867 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.604949 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.604993 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.605029 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.606142 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.606475 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.607129 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.607451 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.610138 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.616783 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.617596 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.618917 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.619550 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.620765 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.632373 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.632740 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.634789 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.639937 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.648929 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.655574 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.705882 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.706101 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.706201 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.708153 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.710012 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.719247 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.733365 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.742275 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.770176 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.776670 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.792565 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.803095 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.857849 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.857891 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.857924 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.857961 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.857992 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.858027 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.863357 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.863575 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.863678 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.863788 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.863922 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.865681 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.868442 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.868633 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.870368 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.870514 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.871170 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.871293 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.876209 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.877265 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.885971 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.886255 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.886824 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.888634 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.895158 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.900732 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.909460 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.949930 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.958441 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.972209 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:09:09 crc kubenswrapper[3561]: I1203 00:09:09.989408 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.011331 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 03 00:09:10 crc kubenswrapper[3561]: W1203 00:09:10.015128 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13045510_8717_4a71_ade4_be95a76440a7.slice/crio-9972f9521b3737e582502f660eca29fbb2d01c53ab23c5d08cf38f0824217973 WatchSource:0}: Error finding container 9972f9521b3737e582502f660eca29fbb2d01c53ab23c5d08cf38f0824217973: Status 404 returned error can't find the container with id 9972f9521b3737e582502f660eca29fbb2d01c53ab23c5d08cf38f0824217973
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067117 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067878 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067911 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067952 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067976 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.067998 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.071071 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.071210 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.071375 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.071473 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.071532 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.085396 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.085878 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.086004 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.086119 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.095179 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.096012 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.098981 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.102359 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.103469 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.169188 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.169251 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.169286 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.169316 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.173303 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.173729 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.173939 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.175630 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.181478 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.182527 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.201366 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.204722 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.205422 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.265137 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.270791 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.270841 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.276153 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.300245 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.300455 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.354192 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.358875 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.359093 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.367879 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.372147 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.372196 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.395916 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.404779 
3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.432789 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.444838 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.471036 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.471103 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.472847 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.472891 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: 
\"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.472915 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.472936 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.477530 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.477918 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.485600 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.486125 3561 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.486443 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.486771 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.487011 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.527168 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.546642 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.551422 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:09:10 crc kubenswrapper[3561]: W1203 00:09:10.562682 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4092a9f8_5acc_4932_9e90_ef962eeb301a.slice/crio-ed9a703265a24c18752b487cae87144ce0098ccd2888efb9fe8f1cef8a18bc46 WatchSource:0}: Error finding container ed9a703265a24c18752b487cae87144ce0098ccd2888efb9fe8f1cef8a18bc46: Status 404 returned error can't find the container with id ed9a703265a24c18752b487cae87144ce0098ccd2888efb9fe8f1cef8a18bc46 Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.573684 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.573744 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.573775 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.576870 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.577028 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.577209 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.594108 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.596771 3561 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.596961 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.601835 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.609781 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.611568 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"4e5f75996fb26eb6f5affc94d12b5856b0588ba456ef527ac067ed481e6e8ac1"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.622290 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"9972f9521b3737e582502f660eca29fbb2d01c53ab23c5d08cf38f0824217973"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.626906 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.634059 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"40b5d8f8a890f6c7a5c368b69547ed30002bca11556a8bea85db754e5aa9321a"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.648288 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.648583 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"8b46f4404d993356b2683331d511994d4c73d0ca9b3ad84139e94a79407a7426"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.657278 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.670023 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"ed9a703265a24c18752b487cae87144ce0098ccd2888efb9fe8f1cef8a18bc46"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.676936 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"14472ee99395031b4d8f129c0ecc479508762ca779de178eb4b021a40e7f9a53"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.687140 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"00f77b83ac29eb45b33532544d3af4bc545005bbcba04a8c11af5746704f0bbb"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.688716 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.718897 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"a505ea00cb4f55a782dc1f29c43306b5f4a92bfde94f5aa042f409265ccf5674"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.726761 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.730189 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.749178 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"3b8195e35f6e9887b2746e906ff0ae6d61159f4b3106aeaa3746739f42e19958"} Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.776749 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.812728 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.820773 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 03 00:09:10 crc kubenswrapper[3561]: I1203 00:09:10.837026 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.766879 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"dbffbeff9066d78d98157b77befe54a2279df3d5cd6e2cd43b39b5c5d8badd0b"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.770105 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"a91dc31df6e8d6165db1ec3ff2be18b1c0a45790e1f8d7791c19cb6dcbbd0619"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.776461 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"09f878d40703eabb9a79febfcbde95f362be2ebbacabe5656d03a4d9029ccdde"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.777050 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"e71141e2565d97afd75c19bab7aa244cb959020aaefd3218618972965ec042e3"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.781989 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"311c603dc0c9101a24dd41f5991b8167ffd0199cb6f7a2e68c9c996ddbd8c845"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.795689 3561 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"df60e5ccd49e54e216f8f60736988543c0cd186ec99207a486e379a6f5388d42"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.802348 3561 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="2a0bbf85d1997f9d176a5cd905decd6099f2f956127e19bc3e046a234701588e" exitCode=0 Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.802511 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"2a0bbf85d1997f9d176a5cd905decd6099f2f956127e19bc3e046a234701588e"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.804963 3561 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.806873 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"61a2701e5f34c9f05b6d21eba7d248c430918b7fe512ea8657a570d9ce4f9a4a"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.809171 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"1fb8f4bda5b0f9dcd4cb47407fae331100c2697ea1d031206c6e23f6dadf0143"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.810045 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" 
event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"d69a2c07f3962d2765b3f973e8347b9653a4fd843bc39f5eec2595d843cd5863"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.813687 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"b972eba8e1c66d40563b04ea038158df4628f045f4141ee78f81b2216f54a86d"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.814779 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"e1d942bd1532d64a26963313ef42c05cb54b55199598d56a821c439612ac3a9e"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.815783 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"886e680349fa8e37bebcf577b162e7bd814103576c213be5d4f24628b58704eb"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.818079 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"7d8c574860953abc64846cbef1ec027141964e680071f47825914dc7c1362e21"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.819508 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"996f26f6f53b896f9b6f7c29985b02e10393505bcf1edfab3e6023953cf64932"} Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.828939 3561 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" 
containerID="8e64213b0065caa5df076b2c2fef8e20f83de78e5235fbfe6a2138215029aa76" exitCode=0
Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.829031 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"8e64213b0065caa5df076b2c2fef8e20f83de78e5235fbfe6a2138215029aa76"}
Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.832883 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"75a2dfacbb7ed469bad416775678301a4c974af21e0a3889e8bc34818d63b12f"}
Dec 03 00:09:11 crc kubenswrapper[3561]: I1203 00:09:11.834868 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"73f48bc8ea354e7c5917cfdd37cef4cc2e9436c9063a25f10b4f4369b944915c"}
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.253746 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod530553aa_0a1d_423e_8a22_f5eb4bdbb883.slice/crio-7c57990c2020d651ea7789553042f81fd2f261945ca528397670abcbe37af64f WatchSource:0}: Error finding container 7c57990c2020d651ea7789553042f81fd2f261945ca528397670abcbe37af64f: Status 404 returned error can't find the container with id 7c57990c2020d651ea7789553042f81fd2f261945ca528397670abcbe37af64f
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.265121 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd556935_a077_45df_ba3f_d42c39326ccd.slice/crio-49795cf4aee3a801d88b7bccf35baadf5bb7f2b674d00e0f7ee8a1f8b34c4c9a WatchSource:0}: Error finding container 49795cf4aee3a801d88b7bccf35baadf5bb7f2b674d00e0f7ee8a1f8b34c4c9a: Status 404 returned error can't find the container with id 49795cf4aee3a801d88b7bccf35baadf5bb7f2b674d00e0f7ee8a1f8b34c4c9a
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.768995 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-8c10a5ea03afba93ab10fb66f2842e3202e3bbbce3f0371261d16fb6def96374 WatchSource:0}: Error finding container 8c10a5ea03afba93ab10fb66f2842e3202e3bbbce3f0371261d16fb6def96374: Status 404 returned error can't find the container with id 8c10a5ea03afba93ab10fb66f2842e3202e3bbbce3f0371261d16fb6def96374
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.774158 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1620f19_8aa3_45cf_931b_7ae0e5cd14cf.slice/crio-a7d03e14e775aa2b40ee50fceab76e968e8e28c16f68b8b31edc68e20eb21b04 WatchSource:0}: Error finding container a7d03e14e775aa2b40ee50fceab76e968e8e28c16f68b8b31edc68e20eb21b04: Status 404 returned error can't find the container with id a7d03e14e775aa2b40ee50fceab76e968e8e28c16f68b8b31edc68e20eb21b04
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.811020 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43ae1c37_047b_4ee2_9fee_41e337dd4ac8.slice/crio-84536cf4c575cc7687b80f694213f43615df9c10fc16e9496dfa063f2490414a WatchSource:0}: Error finding container 84536cf4c575cc7687b80f694213f43615df9c10fc16e9496dfa063f2490414a: Status 404 returned error can't find the container with id 84536cf4c575cc7687b80f694213f43615df9c10fc16e9496dfa063f2490414a
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.847748 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"78d74fbd47dde5f05d57e79b99cd00980b6cbbad7c9730069cdf5103a23052c1"}
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.924153 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a8038e_e7f2_4d93_a6f5_7753aa54e63f.slice/crio-77881707b779c77220c14a0ac8db372794a8e37806be902691f438329da7c5b5 WatchSource:0}: Error finding container 77881707b779c77220c14a0ac8db372794a8e37806be902691f438329da7c5b5: Status 404 returned error can't find the container with id 77881707b779c77220c14a0ac8db372794a8e37806be902691f438329da7c5b5
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.937737 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"6baccd464ae80694b02fdd31e44827944efc16ace6439401164d403b01efd4b9"}
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.949685 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"84536cf4c575cc7687b80f694213f43615df9c10fc16e9496dfa063f2490414a"}
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.953167 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"7130610e12abe903ec4231b165ecf812b4226f7c5a87fe686a10c1e366fccef1"}
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.957465 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"a396bc95e5ce0debc7086a2a80ce9b98f8ba51145da616f667be804b4f347e51"}
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.961938 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"2f74053c54081780d15efae991c763370058b654093dd19dbb6d5e8da85e6070"}
Dec 03 00:09:12 crc kubenswrapper[3561]: W1203 00:09:12.965638 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a48baf_1bee_4921_8bb2_9b7320e76f79.slice/crio-39006bc2f3ee512f73f8e6f6d4675dfb16c79c795430946a4a2dfa3548566fc8 WatchSource:0}: Error finding container 39006bc2f3ee512f73f8e6f6d4675dfb16c79c795430946a4a2dfa3548566fc8: Status 404 returned error can't find the container with id 39006bc2f3ee512f73f8e6f6d4675dfb16c79c795430946a4a2dfa3548566fc8
Dec 03 00:09:12 crc kubenswrapper[3561]: I1203 00:09:12.980534 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"f4ca19a3dfada1e9168ef6f6bd408cf4ccec320f8b7b05e6daf4a71b41c52757"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.001815 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"894fb45ca28622df1d5ad1205fcbde60d55cbbb3cea454b3cdbc972949335510"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.006329 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"8c10a5ea03afba93ab10fb66f2842e3202e3bbbce3f0371261d16fb6def96374"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.009986 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"7c57990c2020d651ea7789553042f81fd2f261945ca528397670abcbe37af64f"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.018063 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"f7be272426a1c83f5742461c42dda42158d951a772202208799d00f0e04b431f"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.024261 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"5e6078e04b080b0d75c3506d920566fafd2f6ffa4fb92214adaa8e896518e379"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.036721 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"a2b1dbcc6c818d8e9bdf16ee560db1672a215e32fdc0381defac15853e9cd9a4"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.052664 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"278e92a61e719c7354e34b5c40c09e7e46fb4b66a0700eb8d7cb516b74f5459e"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.059677 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"610e7734cb84ac6f6725d148a654c9c699b4add4a9ea639b68213c6c7c8dd19b"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.072246 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"1825ead4f6fc0b0d0c0c1db5d438312658b675396b2da93faab065135dda7657"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.074408 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"a7d03e14e775aa2b40ee50fceab76e968e8e28c16f68b8b31edc68e20eb21b04"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.078887 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"b9d774930bf8fa012565cf9d4b2afe4f5f05e655620884e7c2860625c9035163"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.099029 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"ea8981f05017b47c926ce56a53039ce937e19d02f85e5559f29d519c9b0449bd"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.100423 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"0534e0ba6cf94c4bb43379fbdd4c7372ee70f2f46904c094e0b72512d0356918"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.109771 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.111649 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.113591 3561 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body=
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.113632 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused"
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.116488 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"a3dd6646149e0208fd01036562c2fbf0130619486214eb054f73a4cb1a3b8c34"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.121099 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"854e4a1c87075733439780d3405e762428e6420577ec7bb5656f4303a5e89445"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.122837 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"52c6def12b0f67c5de5f757b58077d1be2c8ddd2367ba1fdb7cbb700c0f14064"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.130223 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"49795cf4aee3a801d88b7bccf35baadf5bb7f2b674d00e0f7ee8a1f8b34c4c9a"}
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.131246 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.141764 3561 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Dec 03 00:09:13 crc kubenswrapper[3561]: I1203 00:09:13.141825 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.174551 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"a2f52dc58226949d177d01123004a600775391ba4d9dd3b979e16fb42f1a0891"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.186790 3561 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="5d12d187edd4e4fce7c355126d41f153076bfece11d22f698f903b4228055031" exitCode=0
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.186855 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"5d12d187edd4e4fce7c355126d41f153076bfece11d22f698f903b4228055031"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.206752 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"77881707b779c77220c14a0ac8db372794a8e37806be902691f438329da7c5b5"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.222193 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"b558051e00d455dd9859c405342a5b936788a6ba248b6e6c23a35a92b9071b1c"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.343575 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"92c444cf4642bba423dba7e3d162a509bf0921a134f3601eb81a056163e4b349"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.345142 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"e5f886129b3fc3a8f1a02b09bb188a8d182a3ff152f67674033a7ce9936fa140"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.372522 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"88e2ee035013099a62bbba16c0a8427865ae415b6c4707a0eb8dba627168ba01"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.373796 3561 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.373831 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.375020 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"1f20c0620c20466583cd5e68650251acb02b3238a4346a9120e17a292bbe7281"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.376332 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"414ca8ef703d35c37212f841a2ea0833096c25fdbb78c4cd4c325e40c1a8ac51"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.378414 3561 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="894fb45ca28622df1d5ad1205fcbde60d55cbbb3cea454b3cdbc972949335510" exitCode=0
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.378449 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"894fb45ca28622df1d5ad1205fcbde60d55cbbb3cea454b3cdbc972949335510"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.393765 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"41c5aa77cfc55ec7e6c26613468878444d505787b4b4a003e56a41144b2553cb"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.394014 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.425043 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"1f07b671cf102476db445e48075e2470ed09a774f273f96979015a7df3eec4d3"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.433671 3561 patch_prober.go:28] interesting pod/oauth-openshift-74fc7c67cc-xqf8b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.72:6443/healthz\": dial tcp 10.217.0.72:6443: connect: connection refused" start-of-body=
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.433757 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.72:6443/healthz\": dial tcp 10.217.0.72:6443: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.443242 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.456047 3561 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body=
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.456187 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.482726 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"f468121f8be485549cc81b41aa58862819c2cd691e070441fd1d480db662a7d0"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.492180 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"cb224643731b1ee3b8ef31c1755459754374c1da850b0f08fc525d8075b1a3b5"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.493880 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"befa01ee069f03bb75ab476b1536e77a5d8e3266a21c7ea02a464605bdd8c9eb"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.570231 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"3720b993f1687d6b434d2377cd36dc57333769293a8480c0b2ce8ee9c3ccff66"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.639846 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"1499b0a82e550785d8f70adb113e37d3a8bb9ca0b6b6cf94de67533ec28889f7"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.642243 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.657710 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.657761 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.659360 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"7579f3e358a29c4732a313f34eec317113776a7a287e3358b97b8bd7e7d94f62"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.662185 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"9653550bbcc029e1d696b89864e94126127d439338bbaffb51aed6579b82c63a"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.662982 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.690239 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"57c6c13063afff30707262757b08ff2e16a0e457e3717f86b46a01b19719b47f"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.692084 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.697891 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"39006bc2f3ee512f73f8e6f6d4675dfb16c79c795430946a4a2dfa3548566fc8"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.704573 3561 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.704636 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.705893 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"3a1edd4972a45fe01708b260f07cf9f72a028c47c708c7ebf8426b6dd4c91424"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.708238 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"ef9cb67a08d85b9958dc7e046a3180d03b3fb3fed63dba58bcfdcc2388abb168"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.790058 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"972291988b24acb778f60c494ec09018ab055d609bcd23bf19dbb18e73825bf3"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.796756 3561 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="b0674e8ded85ff264844e27dfcf49975d2b396b0ae592f1888c0e31073d3a577" exitCode=0
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.796857 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"b0674e8ded85ff264844e27dfcf49975d2b396b0ae592f1888c0e31073d3a577"}
Dec 03 00:09:14 crc kubenswrapper[3561]: I1203 00:09:14.815029 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.844374 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"89db06d507e72038f47580c9c6a6a3c0db0efe1cd966cdb3478ff781b0b12404"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.849874 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.850778 3561 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body=
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.850809 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.851980 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"d7993c89e09f83605c776768ae3a9b3c748075f5ed48c07a7c0a217088637695"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.854473 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"8f8fd92b326fef871c2f42798b1ee5b67548e8529ed70f38cccdda1f52110147"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.899645 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"c5b4b3b3b83d57b4a9b4508ed2710715da7c0e88eddcdefbb4a6733f7edf1543"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.899674 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"11e0399e8c3b82c96293dffe5ec318c07f7c02f8f4b6099352821b4384c729c9"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.904657 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"95a13ff559928f477d4b224eba47e6763203cb9ce5d0266ba7aade6dc0af028d"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.916857 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"611436eb836aa0e44210234ce5889857e59ef4cddd79239893c96a047d7874a8"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.917335 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.936592 3561 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.936671 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.937384 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"20f7dafccc74510f4f4d106c99e3ac15d456af159e3773aaa2c55c8e780a40c1"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.938421 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"05ae4d19d4952014e48e84508374b8fe6b1868ec1f0020c8a41f9639b48a0657"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.939157 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.941039 3561 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.941116 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.941417 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"1fe36d01c731ca5dffc33ac72fe6785d9efbdb711013c41fe9bc6c10aee440a0"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.959528 3561 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="fcbc5ae6af618f2983302ff7429b16225f583abd4ac51dee42b8005898a41f08" exitCode=0
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.959645 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"fcbc5ae6af618f2983302ff7429b16225f583abd4ac51dee42b8005898a41f08"}
Dec 03 00:09:15 crc kubenswrapper[3561]: I1203 00:09:15.981193 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"3ab27a884551b2ef9bdb3ab4a34a538d52ade36bf6194a4eb14ff809f688988e"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.023187 3561 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="07f8145e07ed50c07049a638fbe8c9f4f3c7df5eea63eddc880fd6f9033f6fe0" exitCode=0
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.023297 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"07f8145e07ed50c07049a638fbe8c9f4f3c7df5eea63eddc880fd6f9033f6fe0"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.040684 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"074c52a7d2379b6c1568672b382db0ba355b420f73642436c60de44633ef4f86"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.042530 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.047050 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"687006e0d2ea237bcc719f662bfa9eca5c439ae5700ee6164a542179895c5f1f"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.047328 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.062420 3561 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.062494 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.140716 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"cc50d949569b37a4a756ebd3adb801966c31a7b5c10d068cb4d9aa3385dcfbba"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.140805 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.145316 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"dc029634ff3858cf6abe9c8215be49afd01417fc98a25a7c515d4a3bbd259f8f"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.157307 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"776f15d94312bf14a4965d950afa25ab275c1dc54deb88cc295ce08a1c1f6c96"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.162916 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"20518902739f8cf148aaaf9d2270ab907e2a8b344a76fc0299b51a71d9a5c9c1"}
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.163374 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.170443 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"df0a7f71b0150ccc1f00bc38ac4cc2ba0cdf59610f77f8b015535a12eb683b6b"}
Dec 03 00:09:16 crc kubenswrapper[3561]:
I1203 00:09:16.180075 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"eee88287a9c1a25153257c14c4247f5c9a3b427e89cee85e04ea2b9abdab7d71"} Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.181081 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.189094 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"66da3ae1f8e0db6b45524421fe8ec357786749bf1d26bf198facae61b8beb724"} Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.191145 3561 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.191183 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.193665 3561 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="8c2c125de5ee1786510cccaf4b10a48a62a125b7012a001c74f3d6c43a7c221e" exitCode=0 Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.194801 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"8c2c125de5ee1786510cccaf4b10a48a62a125b7012a001c74f3d6c43a7c221e"} Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.199096 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.199157 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.199151 3561 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Dec 03 00:09:16 crc kubenswrapper[3561]: I1203 00:09:16.199224 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.151136 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.200917 3561 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.200994 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.226144 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"b953d0b171ff7e6f50b3c040bc5b29ee1d3992aba18c48c63f0b19a4c51d5578"} Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.311619 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c1be877eb38f31d5f82217ae8b7ab37b5b70973ddf428bbe3c6b01613a9ae4e7"} Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.315248 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"892b64117e488487be3e683acf5bceb8f95efb1aafa992cc98de7cc51546cb0e"} Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.339753 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" 
event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"e99fff07409cf7611a60974d78d158c2f40cb2cb120b9cbef6a11148b1eae3a5"} Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.459621 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"ddb6dd3feca0168e1e287ef39d964ed8f96ff2708fbf0d78ddd532fd95014a51"} Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.476749 3561 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.476796 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.477223 3561 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.477248 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection 
refused" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.478792 3561 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.478822 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.504354 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 03 00:09:17 crc kubenswrapper[3561]: I1203 00:09:17.625412 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:09:18 crc kubenswrapper[3561]: I1203 00:09:18.572714 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"f38e90fe8fce5f32e6e576701c3ed909be587b9efb408b8f2b54c185d3feb7a9"} Dec 03 00:09:18 crc kubenswrapper[3561]: I1203 00:09:18.581370 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.205858 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 
00:09:19.869491 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.869799 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.878513 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.878561 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.881556 3561 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Dec 03 00:09:19 crc kubenswrapper[3561]: I1203 00:09:19.884155 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:19.959825 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:19.961349 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:19.999751 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:19.999826 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.000771 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.000809 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.120110 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.439576 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.467553 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.498728 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" 
Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.633900 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 03 00:09:20 crc kubenswrapper[3561]: I1203 00:09:20.733258 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:09:21 crc kubenswrapper[3561]: I1203 00:09:21.441313 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49" Dec 03 00:09:21 crc kubenswrapper[3561]: I1203 00:09:21.522763 3561 patch_prober.go:28] interesting pod/apiserver-7fc54b8dd7-d2bhp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]log ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]etcd ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/max-in-flight-filter ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 00:09:21 crc kubenswrapper[3561]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 00:09:21 crc kubenswrapper[3561]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/openshift.io-startinformers ok Dec 03 00:09:21 crc kubenswrapper[3561]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 
00:09:21 crc kubenswrapper[3561]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 00:09:21 crc kubenswrapper[3561]: healthz check failed Dec 03 00:09:21 crc kubenswrapper[3561]: I1203 00:09:21.522841 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:22 crc kubenswrapper[3561]: I1203 00:09:22.718844 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"37e0787119ad528aa29114a23ee5da07fc1ae15b768cb124fb2be187241201bb"} Dec 03 00:09:22 crc kubenswrapper[3561]: I1203 00:09:22.877444 3561 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 00:09:23 crc kubenswrapper[3561]: I1203 00:09:23.001436 3561 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T00:09:22.877818425Z","Handler":null,"Name":""} Dec 03 00:09:23 crc kubenswrapper[3561]: I1203 00:09:23.010895 3561 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 00:09:23 crc kubenswrapper[3561]: I1203 00:09:23.010950 3561 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 00:09:24 crc kubenswrapper[3561]: I1203 00:09:24.873847 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:24 crc kubenswrapper[3561]: I1203 00:09:24.881167 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 03 00:09:27 crc kubenswrapper[3561]: I1203 00:09:27.623362 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:27 crc kubenswrapper[3561]: I1203 00:09:27.623455 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.882637 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.891341 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk" Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.990579 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.990666 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.991227 3561 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 03 00:09:29 crc kubenswrapper[3561]: I1203 00:09:29.991266 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.219064 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29412000-czw72"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.219529 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a5813daf-5020-40ff-9715-a2ce6abf39c3" podNamespace="openshift-image-registry" podName="image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.220631 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.222746 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-nzhll" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.223018 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.225061 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.225207 3561 topology_manager.go:215] "Topology Admit Handler" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" podNamespace="openshift-marketplace" podName="redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.226367 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.232730 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.232853 3561 topology_manager.go:215] "Topology Admit Handler" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" podNamespace="openshift-marketplace" podName="redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.234130 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.236037 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27fdr"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.236208 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" podNamespace="openshift-marketplace" podName="certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.238428 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.241947 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.242067 3561 topology_manager.go:215] "Topology Admit Handler" podUID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.243006 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.248374 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.248686 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.302400 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.315789 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.305644 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27fdr"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.316237 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-czw72"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.316160 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hmq\" (UniqueName: \"kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq\") pod 
\"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.316462 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn592\" (UniqueName: \"kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.316600 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.318955 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.319179 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.319331 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.319496 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9zh\" (UniqueName: \"kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.319692 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.325651 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.325811 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.325874 3561 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97j9\" (UniqueName: \"kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9\") pod \"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.325980 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca\") pod \"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.326158 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.330672 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.350233 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"] Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427468 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j97j9\" (UniqueName: \"kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9\") pod \"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427557 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca\") pod 
\"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427599 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427626 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hmq\" (UniqueName: \"kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427645 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427675 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pn592\" (UniqueName: \"kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427702 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427731 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427755 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427775 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427816 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9zh\" (UniqueName: \"kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427836 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427862 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.427888 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428335 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428413 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428675 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428697 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428793 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.428991 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca\") pod \"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.429127 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.430612 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume\") pod 
\"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.447122 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.447569 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc\") pod \"collect-profiles-29412000-zt5qt\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.448173 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hmq\" (UniqueName: \"kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq\") pod \"certified-operators-27fdr\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") " pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.449661 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j97j9\" (UniqueName: \"kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9\") pod \"image-pruner-29412000-czw72\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") " pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.449919 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn592\" (UniqueName: 
\"kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592\") pod \"redhat-marketplace-r7bqm\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.456518 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9zh\" (UniqueName: \"kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh\") pod \"redhat-operators-fp8v6\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.546255 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-czw72" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.564798 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.573320 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.584708 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27fdr" Dec 03 00:09:31 crc kubenswrapper[3561]: I1203 00:09:31.596633 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:09:39 crc kubenswrapper[3561]: I1203 00:09:39.995789 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 03 00:09:41 crc kubenswrapper[3561]: I1203 00:09:41.501683 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:09:41 crc kubenswrapper[3561]: I1203 00:09:41.501894 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:09:41 crc kubenswrapper[3561]: I1203 00:09:41.501937 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:09:41 crc kubenswrapper[3561]: I1203 00:09:41.501986 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:09:41 crc kubenswrapper[3561]: I1203 00:09:41.502023 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:09:49 crc kubenswrapper[3561]: I1203 00:09:49.904530 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"8a7f40eb1c5cb4e355dafa36fd0e1ceb3266ead57a120a7841620dba6da77eea"} Dec 03 00:09:49 crc kubenswrapper[3561]: I1203 00:09:49.978526 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 03 00:09:50 crc kubenswrapper[3561]: I1203 00:09:50.364121 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 03 00:09:50 crc kubenswrapper[3561]: I1203 
00:09:50.911982 3561 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="480f0970e9e9dc9b9af0dc4fbf13231ac94f2e6658d265a517c39d1ae9f0323c" exitCode=0 Dec 03 00:09:50 crc kubenswrapper[3561]: I1203 00:09:50.912051 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"480f0970e9e9dc9b9af0dc4fbf13231ac94f2e6658d265a517c39d1ae9f0323c"} Dec 03 00:09:57 crc kubenswrapper[3561]: I1203 00:09:57.623135 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:57 crc kubenswrapper[3561]: I1203 00:09:57.623704 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:10:15 crc kubenswrapper[3561]: E1203 00:10:15.715086 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 03 00:10:23 crc kubenswrapper[3561]: I1203 00:10:23.637211 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"] Dec 03 00:10:23 crc kubenswrapper[3561]: I1203 00:10:23.765219 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-czw72"] Dec 03 00:10:23 crc 
kubenswrapper[3561]: I1203 00:10:23.915144 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27fdr"] Dec 03 00:10:23 crc kubenswrapper[3561]: I1203 00:10:23.922178 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt"] Dec 03 00:10:25 crc kubenswrapper[3561]: I1203 00:10:25.111704 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"d9b86967d807ecc439da71b9911458db69c8d7b7d4d6c5578a201aaf721a46ba"} Dec 03 00:10:27 crc kubenswrapper[3561]: I1203 00:10:27.623458 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:10:27 crc kubenswrapper[3561]: I1203 00:10:27.623568 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:10:27 crc kubenswrapper[3561]: I1203 00:10:27.623606 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:10:27 crc kubenswrapper[3561]: I1203 00:10:27.624293 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:10:27 crc kubenswrapper[3561]: I1203 00:10:27.624458 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251" gracePeriod=600 Dec 03 00:10:29 crc kubenswrapper[3561]: I1203 00:10:29.130570 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251" exitCode=0 Dec 03 00:10:29 crc kubenswrapper[3561]: I1203 00:10:29.130634 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251"} Dec 03 00:10:29 crc kubenswrapper[3561]: W1203 00:10:29.343177 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f76c53_6f7f_475a_a091_33fe0506eb7d.slice/crio-d7709596397993e22b949469e73382970e1c29449f279686043cc4e18e925c2b WatchSource:0}: Error finding container d7709596397993e22b949469e73382970e1c29449f279686043cc4e18e925c2b: Status 404 returned error can't find the container with id d7709596397993e22b949469e73382970e1c29449f279686043cc4e18e925c2b Dec 03 00:10:29 crc kubenswrapper[3561]: W1203 00:10:29.360237 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5813daf_5020_40ff_9715_a2ce6abf39c3.slice/crio-ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2 WatchSource:0}: Error finding container 
ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2: Status 404 returned error can't find the container with id ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2 Dec 03 00:10:29 crc kubenswrapper[3561]: I1203 00:10:29.663923 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:10:29 crc kubenswrapper[3561]: I1203 00:10:29.790218 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"] Dec 03 00:10:30 crc kubenswrapper[3561]: I1203 00:10:30.136816 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp8v6" event={"ID":"33f76c53-6f7f-475a-a091-33fe0506eb7d","Type":"ContainerStarted","Data":"d7709596397993e22b949469e73382970e1c29449f279686043cc4e18e925c2b"} Dec 03 00:10:30 crc kubenswrapper[3561]: I1203 00:10:30.139847 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27fdr" event={"ID":"8a653300-bd4c-4c3f-ad33-e102862155b1","Type":"ContainerStarted","Data":"4675ed633befdd382d6398bb368ad467a717c5cb6f8dee511116723dcaa17055"} Dec 03 00:10:30 crc kubenswrapper[3561]: I1203 00:10:30.140666 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-czw72" event={"ID":"a5813daf-5020-40ff-9715-a2ce6abf39c3","Type":"ContainerStarted","Data":"ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2"} Dec 03 00:10:30 crc kubenswrapper[3561]: I1203 00:10:30.142025 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bqm" event={"ID":"4c5b1235-88b1-4e71-b697-04c9f657382e","Type":"ContainerStarted","Data":"8b450404665d090a2f81b6a532d7fe67c5aa7793eb3a42d28e5dadc746b63f1f"} Dec 03 00:10:30 crc kubenswrapper[3561]: I1203 00:10:30.142989 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" event={"ID":"4f46bfa4-9000-4c75-9e86-49671ca56ef0","Type":"ContainerStarted","Data":"d80aaf8459ef2b426f4a8b0521e59cfe4f7d2a543d5a3c8241556d4eb51034fb"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.150696 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"2c21e85cffcebc7d6aa31effa6be0d1df429d17fa6e02e3b7f827d74847cd594"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.153868 3561 generic.go:334] "Generic (PLEG): container finished" podID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerID="ad1ff79002dbd6076e8e1d95d3f2703955c4ed5ed17d38410203505969a089da" exitCode=0 Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.153965 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp8v6" event={"ID":"33f76c53-6f7f-475a-a091-33fe0506eb7d","Type":"ContainerDied","Data":"ad1ff79002dbd6076e8e1d95d3f2703955c4ed5ed17d38410203505969a089da"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.156259 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-czw72" event={"ID":"a5813daf-5020-40ff-9715-a2ce6abf39c3","Type":"ContainerStarted","Data":"3f1b679263efce6e3f60928be2de5e3763bbc695f55148dbb64b676393b29e1e"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.165408 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"4fbbdf924a3627bed34fd7e11a72c19bb1ad9a6ed5a837fcf28bfd7e039e9582"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.167522 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.174487 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"1fceab1ba2ca09ca7066e4b06580db593a491e1ba4b4c912f385849e81144762"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.175771 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"cbd23720d2dea398062aa1fba2e24a2065405b2e0a179701878af9d87dc6c355"} Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.732879 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:10:31 crc kubenswrapper[3561]: I1203 00:10:31.778580 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.182977 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"04ab4d712bf5d45f4a45f70118247ead77cbc3da26bf85305041ca2e0534a9c9"} Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.185516 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"80997635239b7fe2c92324491a3e1b8d8b722d29697af35f6b70d4a77e58ed4d"} Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.187184 3561 generic.go:334] "Generic (PLEG): container finished" podID="8a653300-bd4c-4c3f-ad33-e102862155b1" 
containerID="7725147dc8dad348d8be96f15ea60404754980b6d64a60a8de48e7a62418684f" exitCode=0 Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.187227 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27fdr" event={"ID":"8a653300-bd4c-4c3f-ad33-e102862155b1","Type":"ContainerDied","Data":"7725147dc8dad348d8be96f15ea60404754980b6d64a60a8de48e7a62418684f"} Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.188503 3561 generic.go:334] "Generic (PLEG): container finished" podID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerID="8f8aae87545c36a5555dde587bbc2ca28b50dd12454856df10b948877990599d" exitCode=0 Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.188555 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bqm" event={"ID":"4c5b1235-88b1-4e71-b697-04c9f657382e","Type":"ContainerDied","Data":"8f8aae87545c36a5555dde587bbc2ca28b50dd12454856df10b948877990599d"} Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.190415 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" event={"ID":"4f46bfa4-9000-4c75-9e86-49671ca56ef0","Type":"ContainerStarted","Data":"fee0098b56ac859649d44a806070ca7bd1a97107a457ff85d69279f22d85da26"} Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.191683 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:10:32 crc kubenswrapper[3561]: I1203 00:10:32.194413 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 03 00:10:34 crc kubenswrapper[3561]: I1203 00:10:34.201627 3561 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="4fbbdf924a3627bed34fd7e11a72c19bb1ad9a6ed5a837fcf28bfd7e039e9582" exitCode=0 Dec 03 00:10:34 crc 
kubenswrapper[3561]: I1203 00:10:34.201738 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"4fbbdf924a3627bed34fd7e11a72c19bb1ad9a6ed5a837fcf28bfd7e039e9582"} Dec 03 00:10:34 crc kubenswrapper[3561]: I1203 00:10:34.283269 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" podStartSLOduration=198.283124645 podStartE2EDuration="3m18.283124645s" podCreationTimestamp="2025-12-03 00:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:34.280936326 +0000 UTC m=+233.061370604" watchObservedRunningTime="2025-12-03 00:10:34.283124645 +0000 UTC m=+233.063558903" Dec 03 00:10:34 crc kubenswrapper[3561]: I1203 00:10:34.415099 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29412000-czw72" podStartSLOduration=198.415013938 podStartE2EDuration="3m18.415013938s" podCreationTimestamp="2025-12-03 00:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:34.410206586 +0000 UTC m=+233.190640854" watchObservedRunningTime="2025-12-03 00:10:34.415013938 +0000 UTC m=+233.195448196" Dec 03 00:10:37 crc kubenswrapper[3561]: I1203 00:10:37.228239 3561 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="cbd23720d2dea398062aa1fba2e24a2065405b2e0a179701878af9d87dc6c355" exitCode=0 Dec 03 00:10:37 crc kubenswrapper[3561]: I1203 00:10:37.228323 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" 
event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"cbd23720d2dea398062aa1fba2e24a2065405b2e0a179701878af9d87dc6c355"} Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.254312 3561 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="1fceab1ba2ca09ca7066e4b06580db593a491e1ba4b4c912f385849e81144762" exitCode=0 Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.254390 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"1fceab1ba2ca09ca7066e4b06580db593a491e1ba4b4c912f385849e81144762"} Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.258150 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27fdr" event={"ID":"8a653300-bd4c-4c3f-ad33-e102862155b1","Type":"ContainerStarted","Data":"8abd782bf27461d299ab5cb2790a03aedcc7dbb3bd5ce271f8cccb987d9ceb43"} Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.263499 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bqm" event={"ID":"4c5b1235-88b1-4e71-b697-04c9f657382e","Type":"ContainerStarted","Data":"fa5bb0d8d10d989382f50a724184f5b4effd9047bfb540ec6167ae77d6370972"} Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.265510 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"b0bc633e63b404e585c1ec38598cf53a0d07c97c53029d331bfd44596af69f7f"} Dec 03 00:10:38 crc kubenswrapper[3561]: I1203 00:10:38.266887 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp8v6" 
event={"ID":"33f76c53-6f7f-475a-a091-33fe0506eb7d","Type":"ContainerStarted","Data":"fccada18e53085fb519ff3a89c58c1afd71555bdfd4be48110b07d6c686e95aa"}
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.033500 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27fdr"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.037151 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.046495 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.046781 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" containerID="cri-o://04ab4d712bf5d45f4a45f70118247ead77cbc3da26bf85305041ca2e0534a9c9" gracePeriod=30
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.057650 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.073177 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.073613 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" containerID="cri-o://eee88287a9c1a25153257c14c4247f5c9a3b427e89cee85e04ea2b9abdab7d71" gracePeriod=30
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.081960 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"]
Dec 03 00:10:39 crc
kubenswrapper[3561]: I1203 00:10:39.098754 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.108441 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.110780 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" containerID="cri-o://80997635239b7fe2c92324491a3e1b8d8b722d29697af35f6b70d4a77e58ed4d" gracePeriod=30
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.116262 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xmpf5"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.116391 3561 topology_manager.go:215] "Topology Admit Handler" podUID="054d1742-0d77-4532-8193-ddbc28411371" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.117085 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.119189 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.132991 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.140204 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xmpf5"]
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.271320 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" containerID="cri-o://b0bc633e63b404e585c1ec38598cf53a0d07c97c53029d331bfd44596af69f7f" gracePeriod=30
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.272057 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-27fdr" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-content" containerID="cri-o://8abd782bf27461d299ab5cb2790a03aedcc7dbb3bd5ce271f8cccb987d9ceb43" gracePeriod=30
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.301210 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/054d1742-0d77-4532-8193-ddbc28411371-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.301274 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/054d1742-0d77-4532-8193-ddbc28411371-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.301705 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6l7p\" (UniqueName: \"kubernetes.io/projected/054d1742-0d77-4532-8193-ddbc28411371-kube-api-access-m6l7p\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.403299 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/054d1742-0d77-4532-8193-ddbc28411371-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.403386 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/054d1742-0d77-4532-8193-ddbc28411371-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.403525 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-m6l7p\" (UniqueName: \"kubernetes.io/projected/054d1742-0d77-4532-8193-ddbc28411371-kube-api-access-m6l7p\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") "
pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.406121 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/054d1742-0d77-4532-8193-ddbc28411371-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.492135 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/054d1742-0d77-4532-8193-ddbc28411371-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.498621 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6l7p\" (UniqueName: \"kubernetes.io/projected/054d1742-0d77-4532-8193-ddbc28411371-kube-api-access-m6l7p\") pod \"marketplace-operator-8b455464d-xmpf5\" (UID: \"054d1742-0d77-4532-8193-ddbc28411371\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:39 crc kubenswrapper[3561]: I1203 00:10:39.588972 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.283834 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xmpf5"]
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.285497 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.286014 3561 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="04ab4d712bf5d45f4a45f70118247ead77cbc3da26bf85305041ca2e0534a9c9" exitCode=2
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.286051 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"04ab4d712bf5d45f4a45f70118247ead77cbc3da26bf85305041ca2e0534a9c9"}
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.288139 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.288554 3561 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="80997635239b7fe2c92324491a3e1b8d8b722d29697af35f6b70d4a77e58ed4d" exitCode=2
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.288596 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"80997635239b7fe2c92324491a3e1b8d8b722d29697af35f6b70d4a77e58ed4d"}
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.290810 3561 logs.go:325] "Finished parsing log
file" path="/var/log/pods/openshift-marketplace_certified-operators-27fdr_8a653300-bd4c-4c3f-ad33-e102862155b1/extract-content/0.log"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.291087 3561 generic.go:334] "Generic (PLEG): container finished" podID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerID="8abd782bf27461d299ab5cb2790a03aedcc7dbb3bd5ce271f8cccb987d9ceb43" exitCode=2
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.291127 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27fdr" event={"ID":"8a653300-bd4c-4c3f-ad33-e102862155b1","Type":"ContainerDied","Data":"8abd782bf27461d299ab5cb2790a03aedcc7dbb3bd5ce271f8cccb987d9ceb43"}
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.292923 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r7bqm" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-content" containerID="cri-o://fa5bb0d8d10d989382f50a724184f5b4effd9047bfb540ec6167ae77d6370972" gracePeriod=30
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.293205 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"f3c07f7d18d20a694da4892411c7b4a1fb2cbfcbcc2fdaa23ce16ecb4f693f0d"}
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.293317 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fp8v6" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-content" containerID="cri-o://fccada18e53085fb519ff3a89c58c1afd71555bdfd4be48110b07d6c686e95aa" gracePeriod=30
Dec 03 00:10:40 crc kubenswrapper[3561]: W1203 00:10:40.296064 3561 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod054d1742_0d77_4532_8193_ddbc28411371.slice/crio-556921de9acbc9ac9f0b35b807b5c847835ec9335337f68b9e6f3bf2aebc6708 WatchSource:0}: Error finding container 556921de9acbc9ac9f0b35b807b5c847835ec9335337f68b9e6f3bf2aebc6708: Status 404 returned error can't find the container with id 556921de9acbc9ac9f0b35b807b5c847835ec9335337f68b9e6f3bf2aebc6708
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.372668 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.731377 3561 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.731490 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.984665 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Dec 03 00:10:40 crc kubenswrapper[3561]: I1203 00:10:40.984989 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.028707 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.028764 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.028804 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.050975 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-27fdr_8a653300-bd4c-4c3f-ad33-e102862155b1/extract-content/0.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.051355 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27fdr"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.056933 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.057251 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.060888 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" (OuterVolumeSpecName: "kube-api-access-ptdrb") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "kube-api-access-ptdrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129327 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9hmq\" (UniqueName: \"kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq\") pod \"8a653300-bd4c-4c3f-ad33-e102862155b1\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129387 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129410 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129431 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities\") pod \"8a653300-bd4c-4c3f-ad33-e102862155b1\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") "
Dec 03 00:10:41 crc
kubenswrapper[3561]: I1203 00:10:41.129456 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129484 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content\") pod \"8a653300-bd4c-4c3f-ad33-e102862155b1\" (UID: \"8a653300-bd4c-4c3f-ad33-e102862155b1\") "
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.129653 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.134573 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq" (OuterVolumeSpecName: "kube-api-access-t9hmq") pod "8a653300-bd4c-4c3f-ad33-e102862155b1" (UID: "8a653300-bd4c-4c3f-ad33-e102862155b1"). InnerVolumeSpecName "kube-api-access-t9hmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.134672 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities" (OuterVolumeSpecName: "utilities") pod "8a653300-bd4c-4c3f-ad33-e102862155b1" (UID: "8a653300-bd4c-4c3f-ad33-e102862155b1"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.134767 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" (OuterVolumeSpecName: "utilities") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.135472 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" (OuterVolumeSpecName: "kube-api-access-n6sqt") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "kube-api-access-n6sqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.230925 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.230990 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.231007 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t9hmq\" (UniqueName: \"kubernetes.io/projected/8a653300-bd4c-4c3f-ad33-e102862155b1-kube-api-access-t9hmq\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.231030 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") on
node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.235475 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" (OuterVolumeSpecName: "utilities") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.308378 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"bf1fedafbf0378ee1291861d7879fb0a363a1feda9387b73caea0947e3accd63"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.310029 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" containerID="cri-o://bf1fedafbf0378ee1291861d7879fb0a363a1feda9387b73caea0947e3accd63" gracePeriod=30
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.311847 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-27fdr_8a653300-bd4c-4c3f-ad33-e102862155b1/extract-content/0.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.313713 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27fdr" event={"ID":"8a653300-bd4c-4c3f-ad33-e102862155b1","Type":"ContainerDied","Data":"4675ed633befdd382d6398bb368ad467a717c5cb6f8dee511116723dcaa17055"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.313800 3561 scope.go:117] "RemoveContainer" containerID="8abd782bf27461d299ab5cb2790a03aedcc7dbb3bd5ce271f8cccb987d9ceb43"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.316719 3561 util.go:48] "No ready sandbox for pod can
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27fdr"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.325813 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5" event={"ID":"054d1742-0d77-4532-8193-ddbc28411371","Type":"ContainerStarted","Data":"909ca6011602a2fed5229cda077f44002ae58fe36751acf5d68b02109f4ff393"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.325862 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5" event={"ID":"054d1742-0d77-4532-8193-ddbc28411371","Type":"ContainerStarted","Data":"556921de9acbc9ac9f0b35b807b5c847835ec9335337f68b9e6f3bf2aebc6708"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.332434 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.339063 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7bqm_4c5b1235-88b1-4e71-b697-04c9f657382e/extract-content/0.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.339954 3561 generic.go:334] "Generic (PLEG): container finished" podID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerID="fa5bb0d8d10d989382f50a724184f5b4effd9047bfb540ec6167ae77d6370972" exitCode=2
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.340022 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bqm" event={"ID":"4c5b1235-88b1-4e71-b697-04c9f657382e","Type":"ContainerDied","Data":"fa5bb0d8d10d989382f50a724184f5b4effd9047bfb540ec6167ae77d6370972"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.341786 3561 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
containerID="eee88287a9c1a25153257c14c4247f5c9a3b427e89cee85e04ea2b9abdab7d71" exitCode=0
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.341835 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"eee88287a9c1a25153257c14c4247f5c9a3b427e89cee85e04ea2b9abdab7d71"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.343278 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.345244 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"8c10a5ea03afba93ab10fb66f2842e3202e3bbbce3f0371261d16fb6def96374"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.346648 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.357893 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.358474 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"ed9a703265a24c18752b487cae87144ce0098ccd2888efb9fe8f1cef8a18bc46"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.358609 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.361706 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fp8v6_33f76c53-6f7f-475a-a091-33fe0506eb7d/extract-content/0.log"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.363958 3561 generic.go:334] "Generic (PLEG): container finished" podID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerID="fccada18e53085fb519ff3a89c58c1afd71555bdfd4be48110b07d6c686e95aa" exitCode=2
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.364061 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp8v6" event={"ID":"33f76c53-6f7f-475a-a091-33fe0506eb7d","Type":"ContainerDied","Data":"fccada18e53085fb519ff3a89c58c1afd71555bdfd4be48110b07d6c686e95aa"}
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.364208 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="registry-server" containerID="cri-o://f3c07f7d18d20a694da4892411c7b4a1fb2cbfcbcc2fdaa23ce16ecb4f693f0d" gracePeriod=30
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.394806 3561 scope.go:117] "RemoveContainer" containerID="7725147dc8dad348d8be96f15ea60404754980b6d64a60a8de48e7a62418684f"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.401488 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5" podStartSLOduration=2.401387783 podStartE2EDuration="2.401387783s" podCreationTimestamp="2025-12-03 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:41.395766315 +0000 UTC m=+240.176200573" watchObservedRunningTime="2025-12-03 00:10:41.401387783
+0000 UTC m=+240.181822041"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.431127 3561 scope.go:117] "RemoveContainer" containerID="04ab4d712bf5d45f4a45f70118247ead77cbc3da26bf85305041ca2e0534a9c9"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.496586 3561 scope.go:117] "RemoveContainer" containerID="07f8145e07ed50c07049a638fbe8c9f4f3c7df5eea63eddc880fd6f9033f6fe0"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.502724 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.502847 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.502888 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.502918 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.502952 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.531962 3561 scope.go:117] "RemoveContainer" containerID="80997635239b7fe2c92324491a3e1b8d8b722d29697af35f6b70d4a77e58ed4d"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.605038 3561 scope.go:117] "RemoveContainer" containerID="2a0bbf85d1997f9d176a5cd905decd6099f2f956127e19bc3e046a234701588e"
Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.633887 3561 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9afd6710a8abfab3e85dd15842d58f372076fcfaf5c9440460a067e2bdcbadd8/diff" to get inode usage: stat
/var/lib/containers/storage/overlay/9afd6710a8abfab3e85dd15842d58f372076fcfaf5c9440460a067e2bdcbadd8/diff: no such file or directory, extraDiskErr:
Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.656363 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.656437 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist"
Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.656905 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with 96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4"
Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.656970 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with
96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist" Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.657336 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636" Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.657368 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist" Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.661359 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not exist" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.661424 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not 
exist" Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.661898 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.661929 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist" Dec 03 00:10:41 crc kubenswrapper[3561]: E1203 00:10:41.662214 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8" Dec 03 00:10:41 crc kubenswrapper[3561]: I1203 00:10:41.662253 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.089184 3561 logs.go:325] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fp8v6_33f76c53-6f7f-475a-a091-33fe0506eb7d/extract-content/0.log" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.090965 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.144553 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content\") pod \"33f76c53-6f7f-475a-a091-33fe0506eb7d\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.144907 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities\") pod \"33f76c53-6f7f-475a-a091-33fe0506eb7d\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.145008 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9zh\" (UniqueName: \"kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh\") pod \"33f76c53-6f7f-475a-a091-33fe0506eb7d\" (UID: \"33f76c53-6f7f-475a-a091-33fe0506eb7d\") " Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.145770 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities" (OuterVolumeSpecName: "utilities") pod "33f76c53-6f7f-475a-a091-33fe0506eb7d" (UID: "33f76c53-6f7f-475a-a091-33fe0506eb7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.150067 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh" (OuterVolumeSpecName: "kube-api-access-nv9zh") pod "33f76c53-6f7f-475a-a091-33fe0506eb7d" (UID: "33f76c53-6f7f-475a-a091-33fe0506eb7d"). InnerVolumeSpecName "kube-api-access-nv9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.246491 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nv9zh\" (UniqueName: \"kubernetes.io/projected/33f76c53-6f7f-475a-a091-33fe0506eb7d-kube-api-access-nv9zh\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.246570 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.371803 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fp8v6_33f76c53-6f7f-475a-a091-33fe0506eb7d/extract-content/0.log" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.372396 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fp8v6" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.372450 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp8v6" event={"ID":"33f76c53-6f7f-475a-a091-33fe0506eb7d","Type":"ContainerDied","Data":"d7709596397993e22b949469e73382970e1c29449f279686043cc4e18e925c2b"} Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.372522 3561 scope.go:117] "RemoveContainer" containerID="fccada18e53085fb519ff3a89c58c1afd71555bdfd4be48110b07d6c686e95aa" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.374561 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/registry-server/1.log" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.375292 3561 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="bf1fedafbf0378ee1291861d7879fb0a363a1feda9387b73caea0947e3accd63" exitCode=2 Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.375428 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"bf1fedafbf0378ee1291861d7879fb0a363a1feda9387b73caea0947e3accd63"} Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.376599 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.385576 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-xmpf5" Dec 03 00:10:42 crc kubenswrapper[3561]: I1203 00:10:42.414530 3561 scope.go:117] "RemoveContainer" containerID="ad1ff79002dbd6076e8e1d95d3f2703955c4ed5ed17d38410203505969a089da" Dec 03 00:10:43 crc 
kubenswrapper[3561]: I1203 00:10:43.807852 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7bqm_4c5b1235-88b1-4e71-b697-04c9f657382e/extract-content/0.log" Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.809663 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.977198 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn592\" (UniqueName: \"kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592\") pod \"4c5b1235-88b1-4e71-b697-04c9f657382e\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.977300 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities\") pod \"4c5b1235-88b1-4e71-b697-04c9f657382e\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.977452 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content\") pod \"4c5b1235-88b1-4e71-b697-04c9f657382e\" (UID: \"4c5b1235-88b1-4e71-b697-04c9f657382e\") " Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.979628 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities" (OuterVolumeSpecName: "utilities") pod "4c5b1235-88b1-4e71-b697-04c9f657382e" (UID: "4c5b1235-88b1-4e71-b697-04c9f657382e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:43 crc kubenswrapper[3561]: I1203 00:10:43.983697 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592" (OuterVolumeSpecName: "kube-api-access-pn592") pod "4c5b1235-88b1-4e71-b697-04c9f657382e" (UID: "4c5b1235-88b1-4e71-b697-04c9f657382e"). InnerVolumeSpecName "kube-api-access-pn592". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.079239 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pn592\" (UniqueName: \"kubernetes.io/projected/4c5b1235-88b1-4e71-b697-04c9f657382e-kube-api-access-pn592\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.079273 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.393582 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7bqm_4c5b1235-88b1-4e71-b697-04c9f657382e/extract-content/0.log" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.394681 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bqm" event={"ID":"4c5b1235-88b1-4e71-b697-04c9f657382e","Type":"ContainerDied","Data":"8b450404665d090a2f81b6a532d7fe67c5aa7793eb3a42d28e5dadc746b63f1f"} Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.394733 3561 scope.go:117] "RemoveContainer" containerID="fa5bb0d8d10d989382f50a724184f5b4effd9047bfb540ec6167ae77d6370972" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.394825 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bqm" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.437875 3561 scope.go:117] "RemoveContainer" containerID="8f8aae87545c36a5555dde587bbc2ca28b50dd12454856df10b948877990599d" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.702064 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/registry-server/1.log" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.702741 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.709499 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.890256 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.890347 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.890447 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 03 00:10:44 crc 
kubenswrapper[3561]: I1203 00:10:44.890501 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.890531 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.890610 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.891153 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" (OuterVolumeSpecName: "utilities") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.895039 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.895461 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.895497 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" (OuterVolumeSpecName: "kube-api-access-rg2zg") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "kube-api-access-rg2zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.896244 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" (OuterVolumeSpecName: "kube-api-access-ncrf5") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "kube-api-access-ncrf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.992004 3561 reconciler_common.go:300] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.992105 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.992136 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.992165 3561 reconciler_common.go:300] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:44 crc kubenswrapper[3561]: I1203 00:10:44.992192 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.402269 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/registry-server/1.log" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.403941 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.403959 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"2f74053c54081780d15efae991c763370058b654093dd19dbb6d5e8da85e6070"} Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.404035 3561 scope.go:117] "RemoveContainer" containerID="bf1fedafbf0378ee1291861d7879fb0a363a1feda9387b73caea0947e3accd63" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.406771 3561 generic.go:334] "Generic (PLEG): container finished" podID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" containerID="fee0098b56ac859649d44a806070ca7bd1a97107a457ff85d69279f22d85da26" exitCode=0 Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.406841 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" event={"ID":"4f46bfa4-9000-4c75-9e86-49671ca56ef0","Type":"ContainerDied","Data":"fee0098b56ac859649d44a806070ca7bd1a97107a457ff85d69279f22d85da26"} Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.412757 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/registry-server/0.log" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.414084 3561 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="f3c07f7d18d20a694da4892411c7b4a1fb2cbfcbcc2fdaa23ce16ecb4f693f0d" exitCode=2 Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.414202 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"f3c07f7d18d20a694da4892411c7b4a1fb2cbfcbcc2fdaa23ce16ecb4f693f0d"} Dec 03 
00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.417745 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"0534e0ba6cf94c4bb43379fbdd4c7372ee70f2f46904c094e0b72512d0356918"} Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.417862 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.440973 3561 scope.go:117] "RemoveContainer" containerID="1fceab1ba2ca09ca7066e4b06580db593a491e1ba4b4c912f385849e81144762" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.487962 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.492420 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.494923 3561 scope.go:117] "RemoveContainer" containerID="fcbc5ae6af618f2983302ff7429b16225f583abd4ac51dee42b8005898a41f08" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.519663 3561 scope.go:117] "RemoveContainer" containerID="eee88287a9c1a25153257c14c4247f5c9a3b427e89cee85e04ea2b9abdab7d71" Dec 03 00:10:45 crc kubenswrapper[3561]: I1203 00:10:45.670553 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" path="/var/lib/kubelet/pods/3482be94-0cdb-4e2a-889b-e5fac59fdbf5/volumes" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.659078 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.738845 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume\") pod \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.738898 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume\") pod \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.738940 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc\") pod \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\" (UID: \"4f46bfa4-9000-4c75-9e86-49671ca56ef0\") " Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.740184 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f46bfa4-9000-4c75-9e86-49671ca56ef0" (UID: "4f46bfa4-9000-4c75-9e86-49671ca56ef0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.744423 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc" (OuterVolumeSpecName: "kube-api-access-gxdsc") pod "4f46bfa4-9000-4c75-9e86-49671ca56ef0" (UID: "4f46bfa4-9000-4c75-9e86-49671ca56ef0"). 
InnerVolumeSpecName "kube-api-access-gxdsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.744710 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f46bfa4-9000-4c75-9e86-49671ca56ef0" (UID: "4f46bfa4-9000-4c75-9e86-49671ca56ef0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.840990 3561 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f46bfa4-9000-4c75-9e86-49671ca56ef0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.841045 3561 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f46bfa4-9000-4c75-9e86-49671ca56ef0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:46 crc kubenswrapper[3561]: I1203 00:10:46.841071 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/4f46bfa4-9000-4c75-9e86-49671ca56ef0-kube-api-access-gxdsc\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.433857 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" event={"ID":"4f46bfa4-9000-4c75-9e86-49671ca56ef0","Type":"ContainerDied","Data":"d80aaf8459ef2b426f4a8b0521e59cfe4f7d2a543d5a3c8241556d4eb51034fb"} Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.433893 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80aaf8459ef2b426f4a8b0521e59cfe4f7d2a543d5a3c8241556d4eb51034fb" Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.433962 3561 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt" Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.524948 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.529032 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Dec 03 00:10:47 crc kubenswrapper[3561]: I1203 00:10:47.671998 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" path="/var/lib/kubelet/pods/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27/volumes" Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.223347 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/registry-server/0.log" Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.224411 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.259454 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") "
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.259643 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") "
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.259697 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") "
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.261058 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" (OuterVolumeSpecName: "utilities") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.261920 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.267302 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" (OuterVolumeSpecName: "kube-api-access-9p8gt") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "kube-api-access-9p8gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.360749 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.360788 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.441836 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/registry-server/0.log"
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.442896 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"40b5d8f8a890f6c7a5c368b69547ed30002bca11556a8bea85db754e5aa9321a"}
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.442943 3561 scope.go:117] "RemoveContainer" containerID="f3c07f7d18d20a694da4892411c7b4a1fb2cbfcbcc2fdaa23ce16ecb4f693f0d"
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.443064 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.474019 3561 scope.go:117] "RemoveContainer" containerID="cbd23720d2dea398062aa1fba2e24a2065405b2e0a179701878af9d87dc6c355"
Dec 03 00:10:48 crc kubenswrapper[3561]: I1203 00:10:48.515960 3561 scope.go:117] "RemoveContainer" containerID="8e64213b0065caa5df076b2c2fef8e20f83de78e5235fbfe6a2138215029aa76"
Dec 03 00:10:50 crc kubenswrapper[3561]: I1203 00:10:50.594490 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33f76c53-6f7f-475a-a091-33fe0506eb7d" (UID: "33f76c53-6f7f-475a-a091-33fe0506eb7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:50 crc kubenswrapper[3561]: I1203 00:10:50.597765 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f76c53-6f7f-475a-a091-33fe0506eb7d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:50 crc kubenswrapper[3561]: I1203 00:10:50.839746 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"]
Dec 03 00:10:50 crc kubenswrapper[3561]: I1203 00:10:50.846321 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fp8v6"]
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.337909 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c5b1235-88b1-4e71-b697-04c9f657382e" (UID: "4c5b1235-88b1-4e71-b697-04c9f657382e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.408937 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5b1235-88b1-4e71-b697-04c9f657382e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.647742 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"]
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.652651 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bqm"]
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.671604 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" path="/var/lib/kubelet/pods/33f76c53-6f7f-475a-a091-33fe0506eb7d/volumes"
Dec 03 00:10:51 crc kubenswrapper[3561]: I1203 00:10:51.672196 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" path="/var/lib/kubelet/pods/4c5b1235-88b1-4e71-b697-04c9f657382e/volumes"
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.003736 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.038138 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a653300-bd4c-4c3f-ad33-e102862155b1" (UID: "8a653300-bd4c-4c3f-ad33-e102862155b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.080609 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.080657 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a653300-bd4c-4c3f-ad33-e102862155b1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.174752 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27fdr"]
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.182715 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27fdr"]
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.217339 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"]
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.221699 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"]
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.672197 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" path="/var/lib/kubelet/pods/4092a9f8-5acc-4932-9e90-ef962eeb301a/volumes"
Dec 03 00:10:55 crc kubenswrapper[3561]: I1203 00:10:55.673488 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" path="/var/lib/kubelet/pods/8a653300-bd4c-4c3f-ad33-e102862155b1/volumes"
Dec 03 00:10:56 crc kubenswrapper[3561]: I1203 00:10:56.620618 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:56 crc kubenswrapper[3561]: I1203 00:10:56.705823 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:56 crc kubenswrapper[3561]: I1203 00:10:56.875652 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Dec 03 00:10:56 crc kubenswrapper[3561]: I1203 00:10:56.879900 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.257996 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqmqd"]
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258179 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bb2ea96b-ff13-4771-b1d0-c04ee7903248" podNamespace="openshift-marketplace" podName="redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258491 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258666 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258697 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" containerName="collect-profiles"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258705 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" containerName="collect-profiles"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258722 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258731 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258744 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258751 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258761 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258772 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258789 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258796 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258805 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258813 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258826 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258833 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258845 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258851 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258864 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258871 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258881 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258889 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258901 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258909 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258919 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258926 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258935 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258942 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258952 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258962 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258971 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258978 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-utilities"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.258985 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.258993 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: E1203 00:10:57.259002 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259011 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259390 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259406 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259420 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a653300-bd4c-4c3f-ad33-e102862155b1" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259428 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="registry-server"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259436 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5b1235-88b1-4e71-b697-04c9f657382e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259450 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f76c53-6f7f-475a-a091-33fe0506eb7d" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259460 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259469 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" containerName="collect-profiles"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.259477 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.260624 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.263116 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.272164 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqmqd"]
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.294992 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.315227 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-catalog-content\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.315286 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7xp\" (UniqueName: \"kubernetes.io/projected/bb2ea96b-ff13-4771-b1d0-c04ee7903248-kube-api-access-sx7xp\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.315315 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-utilities\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.315533 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.416909 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-catalog-content\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.417005 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7xp\" (UniqueName: \"kubernetes.io/projected/bb2ea96b-ff13-4771-b1d0-c04ee7903248-kube-api-access-sx7xp\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.417035 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-utilities\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.417784 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-utilities\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.417815 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2ea96b-ff13-4771-b1d0-c04ee7903248-catalog-content\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.438764 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7xp\" (UniqueName: \"kubernetes.io/projected/bb2ea96b-ff13-4771-b1d0-c04ee7903248-kube-api-access-sx7xp\") pod \"redhat-operators-nqmqd\" (UID: \"bb2ea96b-ff13-4771-b1d0-c04ee7903248\") " pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.581001 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqmqd"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.600743 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"]
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.604387 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"]
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.671721 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" path="/var/lib/kubelet/pods/3f4dca86-e6ee-4ec9-8324-86aff960225e/volumes"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.672647 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" path="/var/lib/kubelet/pods/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/volumes"
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.712342 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:10:57 crc kubenswrapper[3561]: I1203 00:10:57.720461 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:57.999820 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqmqd"]
Dec 03 00:10:58 crc kubenswrapper[3561]: W1203 00:10:58.007636 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb2ea96b_ff13_4771_b1d0_c04ee7903248.slice/crio-4a53d48490b03de0bf398e6461d3e5e59e8df43111fbe0cf3d1e567fc158c35a WatchSource:0}: Error finding container 4a53d48490b03de0bf398e6461d3e5e59e8df43111fbe0cf3d1e567fc158c35a: Status 404 returned error can't find the container with id 4a53d48490b03de0bf398e6461d3e5e59e8df43111fbe0cf3d1e567fc158c35a
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.042303 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"]
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.044900 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7287f"]
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.521751 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb2ea96b-ff13-4771-b1d0-c04ee7903248" containerID="f017a6cd37698d55a38af29af6ddc3ffd9b1b75b420abb88339bac9396994e5f" exitCode=0
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.521838 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqmqd" event={"ID":"bb2ea96b-ff13-4771-b1d0-c04ee7903248","Type":"ContainerDied","Data":"f017a6cd37698d55a38af29af6ddc3ffd9b1b75b420abb88339bac9396994e5f"}
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.522191 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqmqd" event={"ID":"bb2ea96b-ff13-4771-b1d0-c04ee7903248","Type":"ContainerStarted","Data":"4a53d48490b03de0bf398e6461d3e5e59e8df43111fbe0cf3d1e567fc158c35a"}
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.854776 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqpdr"]
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.854897 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d4f7dbd8-6337-441a-8572-7eb95a3cb2b4" podNamespace="openshift-marketplace" podName="community-operators-pqpdr"
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.855920 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.868318 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.884821 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqpdr"]
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.935108 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzgg\" (UniqueName: \"kubernetes.io/projected/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-kube-api-access-btzgg\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.935218 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-catalog-content\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:58 crc kubenswrapper[3561]: I1203 00:10:58.935494 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-utilities\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.036723 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-catalog-content\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.036797 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-utilities\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.037445 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-utilities\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.037495 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-btzgg\" (UniqueName: \"kubernetes.io/projected/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-kube-api-access-btzgg\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.037708 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-catalog-content\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.060436 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzgg\" (UniqueName: \"kubernetes.io/projected/d4f7dbd8-6337-441a-8572-7eb95a3cb2b4-kube-api-access-btzgg\") pod \"community-operators-pqpdr\" (UID: \"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4\") " pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.183174 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqpdr"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.266791 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lms5f"]
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.266908 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" podNamespace="openshift-marketplace" podName="community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.267971 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.281852 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lms5f"]
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.341693 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsfsl\" (UniqueName: \"kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.341936 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.342068 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.391976 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqpdr"]
Dec 03 00:10:59 crc kubenswrapper[3561]: W1203 00:10:59.405873 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f7dbd8_6337_441a_8572_7eb95a3cb2b4.slice/crio-00e96fb343ea72a417adaf401d4d2615ca01c4ff5f0711b17155880451435673 WatchSource:0}: Error finding container 00e96fb343ea72a417adaf401d4d2615ca01c4ff5f0711b17155880451435673: Status 404 returned error can't find the container with id 00e96fb343ea72a417adaf401d4d2615ca01c4ff5f0711b17155880451435673
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.443257 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.443573 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.443720 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tsfsl\" (UniqueName: \"kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.444034 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.444127 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.470176 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsfsl\" (UniqueName: \"kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl\") pod \"community-operators-lms5f\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " pod="openshift-marketplace/community-operators-lms5f"
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.528358 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqmqd" event={"ID":"bb2ea96b-ff13-4771-b1d0-c04ee7903248","Type":"ContainerStarted","Data":"328820e1d95395cb619a34ff07548cad29d64a23bfda4bdd3144a37e509bb6e3"}
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.529454 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqpdr" event={"ID":"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4","Type":"ContainerStarted","Data":"00e96fb343ea72a417adaf401d4d2615ca01c4ff5f0711b17155880451435673"}
Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.595972 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:10:59 crc kubenswrapper[3561]: I1203 00:10:59.672405 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" path="/var/lib/kubelet/pods/887d596e-c519-4bfa-af90-3edd9e1b2f0f/volumes" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.006721 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lms5f"] Dec 03 00:11:00 crc kubenswrapper[3561]: W1203 00:11:00.012898 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fa8e8db_f102_4f0c_9086_da639d8f90e2.slice/crio-2230617edbaa829a4a6acfdbbc03b9dc31f70a2440e5508357f98f71721fc834 WatchSource:0}: Error finding container 2230617edbaa829a4a6acfdbbc03b9dc31f70a2440e5508357f98f71721fc834: Status 404 returned error can't find the container with id 2230617edbaa829a4a6acfdbbc03b9dc31f70a2440e5508357f98f71721fc834 Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.283682 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p5z9s"] Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.283936 3561 topology_manager.go:215] "Topology Admit Handler" podUID="436d7366-bd91-4ff3-be8f-88da5d161203" podNamespace="openshift-marketplace" podName="certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.286793 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.296077 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.296471 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5z9s"] Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.361350 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-utilities\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.361667 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rljq\" (UniqueName: \"kubernetes.io/projected/436d7366-bd91-4ff3-be8f-88da5d161203-kube-api-access-4rljq\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.361789 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-catalog-content\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.463512 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4rljq\" (UniqueName: \"kubernetes.io/projected/436d7366-bd91-4ff3-be8f-88da5d161203-kube-api-access-4rljq\") pod \"certified-operators-p5z9s\" 
(UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.463680 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-catalog-content\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.463754 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-utilities\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.464461 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-catalog-content\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.464531 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436d7366-bd91-4ff3-be8f-88da5d161203-utilities\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.483276 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rljq\" (UniqueName: \"kubernetes.io/projected/436d7366-bd91-4ff3-be8f-88da5d161203-kube-api-access-4rljq\") pod \"certified-operators-p5z9s\" (UID: \"436d7366-bd91-4ff3-be8f-88da5d161203\") " 
pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.536262 3561 generic.go:334] "Generic (PLEG): container finished" podID="d4f7dbd8-6337-441a-8572-7eb95a3cb2b4" containerID="a0b37a0bc67f475658d7c9b110240f5901d01e37cc94c892f0d929798024c39e" exitCode=0 Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.536342 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqpdr" event={"ID":"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4","Type":"ContainerDied","Data":"a0b37a0bc67f475658d7c9b110240f5901d01e37cc94c892f0d929798024c39e"} Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.538007 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerStarted","Data":"2230617edbaa829a4a6acfdbbc03b9dc31f70a2440e5508357f98f71721fc834"} Dec 03 00:11:00 crc kubenswrapper[3561]: I1203 00:11:00.633991 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:01 crc kubenswrapper[3561]: I1203 00:11:01.054317 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5z9s"] Dec 03 00:11:01 crc kubenswrapper[3561]: W1203 00:11:01.062693 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod436d7366_bd91_4ff3_be8f_88da5d161203.slice/crio-e95365bb88bb6033011329c9a9b8d4cf0f4d7900f9e3e56be7010ecaad130061 WatchSource:0}: Error finding container e95365bb88bb6033011329c9a9b8d4cf0f4d7900f9e3e56be7010ecaad130061: Status 404 returned error can't find the container with id e95365bb88bb6033011329c9a9b8d4cf0f4d7900f9e3e56be7010ecaad130061 Dec 03 00:11:01 crc kubenswrapper[3561]: I1203 00:11:01.553843 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5z9s" event={"ID":"436d7366-bd91-4ff3-be8f-88da5d161203","Type":"ContainerStarted","Data":"e95365bb88bb6033011329c9a9b8d4cf0f4d7900f9e3e56be7010ecaad130061"} Dec 03 00:11:02 crc kubenswrapper[3561]: I1203 00:11:02.559401 3561 generic.go:334] "Generic (PLEG): container finished" podID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerID="dbab4bb0d1022385bed936fc69706bdbe24b3b8d8f7e5de1c906b4ad0ad9a320" exitCode=0 Dec 03 00:11:02 crc kubenswrapper[3561]: I1203 00:11:02.559464 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerDied","Data":"dbab4bb0d1022385bed936fc69706bdbe24b3b8d8f7e5de1c906b4ad0ad9a320"} Dec 03 00:11:02 crc kubenswrapper[3561]: I1203 00:11:02.562253 3561 generic.go:334] "Generic (PLEG): container finished" podID="436d7366-bd91-4ff3-be8f-88da5d161203" containerID="1ec0e7c845e57badd82ae4c25f762742a22d5a16d5f22a27008ca73df98e5b67" exitCode=0 Dec 03 00:11:02 crc kubenswrapper[3561]: I1203 
00:11:02.562288 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5z9s" event={"ID":"436d7366-bd91-4ff3-be8f-88da5d161203","Type":"ContainerDied","Data":"1ec0e7c845e57badd82ae4c25f762742a22d5a16d5f22a27008ca73df98e5b67"} Dec 03 00:11:03 crc kubenswrapper[3561]: I1203 00:11:03.568576 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqpdr" event={"ID":"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4","Type":"ContainerStarted","Data":"97da231e248522935659d60fde78665a782e742d2598dc8f19ec61f8be62c4f5"} Dec 03 00:11:03 crc kubenswrapper[3561]: I1203 00:11:03.572853 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb2ea96b-ff13-4771-b1d0-c04ee7903248" containerID="328820e1d95395cb619a34ff07548cad29d64a23bfda4bdd3144a37e509bb6e3" exitCode=0 Dec 03 00:11:03 crc kubenswrapper[3561]: I1203 00:11:03.572938 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqmqd" event={"ID":"bb2ea96b-ff13-4771-b1d0-c04ee7903248","Type":"ContainerDied","Data":"328820e1d95395cb619a34ff07548cad29d64a23bfda4bdd3144a37e509bb6e3"} Dec 03 00:11:04 crc kubenswrapper[3561]: I1203 00:11:04.578677 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5z9s" event={"ID":"436d7366-bd91-4ff3-be8f-88da5d161203","Type":"ContainerStarted","Data":"f9ba51bba89687189f125e24a8aeaeb973dc109e51a7ee23587a1336dc01ecdc"} Dec 03 00:11:05 crc kubenswrapper[3561]: I1203 00:11:05.585374 3561 generic.go:334] "Generic (PLEG): container finished" podID="436d7366-bd91-4ff3-be8f-88da5d161203" containerID="f9ba51bba89687189f125e24a8aeaeb973dc109e51a7ee23587a1336dc01ecdc" exitCode=0 Dec 03 00:11:05 crc kubenswrapper[3561]: I1203 00:11:05.585468 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5z9s" 
event={"ID":"436d7366-bd91-4ff3-be8f-88da5d161203","Type":"ContainerDied","Data":"f9ba51bba89687189f125e24a8aeaeb973dc109e51a7ee23587a1336dc01ecdc"} Dec 03 00:11:07 crc kubenswrapper[3561]: I1203 00:11:07.598226 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerStarted","Data":"2b3a566ca8d6f32bfbf59e7118ee736e95d7b02b1dd756a05022ae46a402f954"} Dec 03 00:11:08 crc kubenswrapper[3561]: I1203 00:11:08.605577 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5z9s" event={"ID":"436d7366-bd91-4ff3-be8f-88da5d161203","Type":"ContainerStarted","Data":"9e5a970d5f886d234c31cb6633ebe014622ffec0ea86d11bc4fc17a39cbb5c7b"} Dec 03 00:11:08 crc kubenswrapper[3561]: I1203 00:11:08.607236 3561 generic.go:334] "Generic (PLEG): container finished" podID="d4f7dbd8-6337-441a-8572-7eb95a3cb2b4" containerID="97da231e248522935659d60fde78665a782e742d2598dc8f19ec61f8be62c4f5" exitCode=0 Dec 03 00:11:08 crc kubenswrapper[3561]: I1203 00:11:08.607273 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqpdr" event={"ID":"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4","Type":"ContainerDied","Data":"97da231e248522935659d60fde78665a782e742d2598dc8f19ec61f8be62c4f5"} Dec 03 00:11:08 crc kubenswrapper[3561]: I1203 00:11:08.610706 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqmqd" event={"ID":"bb2ea96b-ff13-4771-b1d0-c04ee7903248","Type":"ContainerStarted","Data":"6e3e83ada498bf16f0f688232eec0b2d748acc87d7c10f5ce3b8b6aa58d3b2a6"} Dec 03 00:11:09 crc kubenswrapper[3561]: I1203 00:11:09.665614 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqmqd" podStartSLOduration=7.297263519 podStartE2EDuration="12.66555717s" podCreationTimestamp="2025-12-03 00:10:57 +0000 
UTC" firstStartedPulling="2025-12-03 00:10:58.524152498 +0000 UTC m=+257.304586756" lastFinishedPulling="2025-12-03 00:11:03.892446129 +0000 UTC m=+262.672880407" observedRunningTime="2025-12-03 00:11:09.661621206 +0000 UTC m=+268.442055464" watchObservedRunningTime="2025-12-03 00:11:09.66555717 +0000 UTC m=+268.445991448" Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.623289 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8s8pc_c782cf62-a827-4677-b3c2-6f82c5f09cbb/registry-server/1.log" Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.624168 3561 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="b0bc633e63b404e585c1ec38598cf53a0d07c97c53029d331bfd44596af69f7f" exitCode=137 Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.624224 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"b0bc633e63b404e585c1ec38598cf53a0d07c97c53029d331bfd44596af69f7f"} Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.634745 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.634776 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:10 crc kubenswrapper[3561]: I1203 00:11:10.648346 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p5z9s" podStartSLOduration=7.355523206 podStartE2EDuration="10.648306573s" podCreationTimestamp="2025-12-03 00:11:00 +0000 UTC" firstStartedPulling="2025-12-03 00:11:02.563851605 +0000 UTC m=+261.344285863" lastFinishedPulling="2025-12-03 00:11:05.856634972 +0000 UTC m=+264.637069230" 
observedRunningTime="2025-12-03 00:11:10.646725213 +0000 UTC m=+269.427159521" watchObservedRunningTime="2025-12-03 00:11:10.648306573 +0000 UTC m=+269.428740831" Dec 03 00:11:11 crc kubenswrapper[3561]: I1203 00:11:11.255372 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.280823 3561 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.280928 3561 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.341088 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.571644 3561 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-q786x" Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.576505 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.662067 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqpdr" event={"ID":"d4f7dbd8-6337-441a-8572-7eb95a3cb2b4","Type":"ContainerStarted","Data":"78ce2b8b24ff6b570a298118fd2490e51abac7c7aef99d3fd9bb38d808244f22"} Dec 03 00:11:12 crc kubenswrapper[3561]: I1203 00:11:12.786710 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:13 crc kubenswrapper[3561]: I1203 00:11:13.669858 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"4e38c0efffd2f4b7d6016e17c90aa1b8c0441ce4ee182704bc39c3f7e1481e75"} Dec 03 00:11:13 crc kubenswrapper[3561]: I1203 00:11:13.947062 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8s8pc_c782cf62-a827-4677-b3c2-6f82c5f09cbb/registry-server/1.log" Dec 03 00:11:13 crc kubenswrapper[3561]: I1203 00:11:13.948033 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:11:13 crc kubenswrapper[3561]: I1203 00:11:13.972037 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqpdr" podStartSLOduration=7.612425592 podStartE2EDuration="15.971985786s" podCreationTimestamp="2025-12-03 00:10:58 +0000 UTC" firstStartedPulling="2025-12-03 00:11:00.538378563 +0000 UTC m=+259.318812821" lastFinishedPulling="2025-12-03 00:11:08.897938757 +0000 UTC m=+267.678373015" observedRunningTime="2025-12-03 00:11:13.802296289 +0000 UTC m=+272.582730547" watchObservedRunningTime="2025-12-03 00:11:13.971985786 +0000 UTC m=+272.752420074" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.100174 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.100308 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.100404 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.102135 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" 
(OuterVolumeSpecName: "utilities") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.107273 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" (OuterVolumeSpecName: "kube-api-access-tf29r") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "kube-api-access-tf29r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.201632 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.201916 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.271668 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.303316 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.677413 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8s8pc_c782cf62-a827-4677-b3c2-6f82c5f09cbb/registry-server/1.log" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.680405 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"3a1edd4972a45fe01708b260f07cf9f72a028c47c708c7ebf8426b6dd4c91424"} Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.680465 3561 scope.go:117] "RemoveContainer" containerID="b0bc633e63b404e585c1ec38598cf53a0d07c97c53029d331bfd44596af69f7f" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.680679 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.743738 3561 scope.go:117] "RemoveContainer" containerID="4fbbdf924a3627bed34fd7e11a72c19bb1ad9a6ed5a837fcf28bfd7e039e9582" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.746901 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.752090 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.796312 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"] Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.796456 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" podNamespace="openshift-marketplace" podName="redhat-marketplace-d6mlm" Dec 03 00:11:14 crc kubenswrapper[3561]: E1203 00:11:14.796683 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.796708 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 03 00:11:14 crc kubenswrapper[3561]: E1203 00:11:14.796725 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.796735 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities" Dec 03 00:11:14 crc kubenswrapper[3561]: E1203 00:11:14.796753 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" 
containerName="extract-content" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.796762 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-content" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.797062 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.798247 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.800055 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"] Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.805119 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.912705 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshmw\" (UniqueName: \"kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.912852 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:14 crc kubenswrapper[3561]: I1203 00:11:14.912938 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.015151 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.015270 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rshmw\" (UniqueName: \"kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.015347 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.016247 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.016640 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.047994 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshmw\" (UniqueName: \"kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw\") pod \"redhat-marketplace-d6mlm\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") " pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.121444 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.437031 3561 scope.go:117] "RemoveContainer" containerID="8c2c125de5ee1786510cccaf4b10a48a62a125b7012a001c74f3d6c43a7c221e" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.670803 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" path="/var/lib/kubelet/pods/c782cf62-a827-4677-b3c2-6f82c5f09cbb/volumes" Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.690479 3561 generic.go:334] "Generic (PLEG): container finished" podID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerID="2b3a566ca8d6f32bfbf59e7118ee736e95d7b02b1dd756a05022ae46a402f954" exitCode=0 Dec 03 00:11:15 crc kubenswrapper[3561]: I1203 00:11:15.690628 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerDied","Data":"2b3a566ca8d6f32bfbf59e7118ee736e95d7b02b1dd756a05022ae46a402f954"} Dec 03 00:11:16 crc kubenswrapper[3561]: I1203 00:11:16.032370 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"] Dec 03 00:11:16 
crc kubenswrapper[3561]: I1203 00:11:16.696577 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerStarted","Data":"d3c007dd9147923a7ab95b2c282d46cbad8b4a0b6e843766f5db99954f3b0086"} Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.581853 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqmqd" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.582770 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqmqd" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.680645 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqmqd" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.708565 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.708665 3561 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="f7be272426a1c83f5742461c42dda42158d951a772202208799d00f0e04b431f" exitCode=1 Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.708762 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"f7be272426a1c83f5742461c42dda42158d951a772202208799d00f0e04b431f"} Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.710081 3561 scope.go:117] "RemoveContainer" containerID="f7be272426a1c83f5742461c42dda42158d951a772202208799d00f0e04b431f" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.714177 3561 generic.go:334] "Generic (PLEG): container 
finished" podID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerID="70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96" exitCode=0 Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.714421 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerDied","Data":"70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96"} Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.743558 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerStarted","Data":"18440e1bc38ef06ee542d7d3c4ecfaa99c398540478036aa61d2ac6c775cdaf4"} Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.748882 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"b83cd103236d01d38fc2aa5c593883b30fde4f3fc27c3f90b045d52f47698a34"} Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.749109 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 00:11:17.817488 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lms5f" podStartSLOduration=6.179822879 podStartE2EDuration="18.817443448s" podCreationTimestamp="2025-12-03 00:10:59 +0000 UTC" firstStartedPulling="2025-12-03 00:11:03.574236601 +0000 UTC m=+262.354670849" lastFinishedPulling="2025-12-03 00:11:16.21185713 +0000 UTC m=+274.992291418" observedRunningTime="2025-12-03 00:11:17.810531892 +0000 UTC m=+276.590966170" watchObservedRunningTime="2025-12-03 00:11:17.817443448 +0000 UTC m=+276.597877716" Dec 03 00:11:17 crc kubenswrapper[3561]: I1203 
00:11:17.873772 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqmqd" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.184106 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqpdr" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.185423 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqpdr" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.279174 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqpdr" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.596147 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.596734 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.760695 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log" Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.760806 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"1fd363d7015086e67b59b489a3d7a15c3b51316ea5b9e77e546bf6dbe1857dd5"} Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.762237 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" 
event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerStarted","Data":"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a"} Dec 03 00:11:19 crc kubenswrapper[3561]: I1203 00:11:19.927497 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqpdr" Dec 03 00:11:20 crc kubenswrapper[3561]: I1203 00:11:20.791287 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lms5f" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="registry-server" probeResult="failure" output=< Dec 03 00:11:20 crc kubenswrapper[3561]: timeout: failed to connect service ":50051" within 1s Dec 03 00:11:20 crc kubenswrapper[3561]: > Dec 03 00:11:21 crc kubenswrapper[3561]: I1203 00:11:21.138178 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p5z9s" Dec 03 00:11:21 crc kubenswrapper[3561]: I1203 00:11:21.776073 3561 generic.go:334] "Generic (PLEG): container finished" podID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerID="7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a" exitCode=0 Dec 03 00:11:21 crc kubenswrapper[3561]: I1203 00:11:21.776333 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerDied","Data":"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a"} Dec 03 00:11:22 crc kubenswrapper[3561]: I1203 00:11:22.786522 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerStarted","Data":"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9"} Dec 03 00:11:23 crc kubenswrapper[3561]: I1203 00:11:23.811318 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-d6mlm" podStartSLOduration=5.240127656 podStartE2EDuration="9.811275767s" podCreationTimestamp="2025-12-03 00:11:14 +0000 UTC" firstStartedPulling="2025-12-03 00:11:17.717026692 +0000 UTC m=+276.497460950" lastFinishedPulling="2025-12-03 00:11:22.288174803 +0000 UTC m=+281.068609061" observedRunningTime="2025-12-03 00:11:23.808268173 +0000 UTC m=+282.588702431" watchObservedRunningTime="2025-12-03 00:11:23.811275767 +0000 UTC m=+282.591710025" Dec 03 00:11:25 crc kubenswrapper[3561]: I1203 00:11:25.122396 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:25 crc kubenswrapper[3561]: I1203 00:11:25.122984 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:25 crc kubenswrapper[3561]: I1203 00:11:25.223380 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:29 crc kubenswrapper[3561]: I1203 00:11:29.685293 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:29 crc kubenswrapper[3561]: I1203 00:11:29.770599 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:29 crc kubenswrapper[3561]: I1203 00:11:29.824724 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lms5f"] Dec 03 00:11:30 crc kubenswrapper[3561]: I1203 00:11:30.825830 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lms5f" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="registry-server" containerID="cri-o://18440e1bc38ef06ee542d7d3c4ecfaa99c398540478036aa61d2ac6c775cdaf4" gracePeriod=2 Dec 03 
00:11:32 crc kubenswrapper[3561]: I1203 00:11:32.584862 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:11:34 crc kubenswrapper[3561]: I1203 00:11:34.848506 3561 generic.go:334] "Generic (PLEG): container finished" podID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerID="18440e1bc38ef06ee542d7d3c4ecfaa99c398540478036aa61d2ac6c775cdaf4" exitCode=0 Dec 03 00:11:34 crc kubenswrapper[3561]: I1203 00:11:34.848584 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerDied","Data":"18440e1bc38ef06ee542d7d3c4ecfaa99c398540478036aa61d2ac6c775cdaf4"} Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.193628 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.239422 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.285980 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content\") pod \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.286061 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities\") pod \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.286101 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-tsfsl\" (UniqueName: \"kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl\") pod \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\" (UID: \"6fa8e8db-f102-4f0c-9086-da639d8f90e2\") " Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.287707 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities" (OuterVolumeSpecName: "utilities") pod "6fa8e8db-f102-4f0c-9086-da639d8f90e2" (UID: "6fa8e8db-f102-4f0c-9086-da639d8f90e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.291910 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl" (OuterVolumeSpecName: "kube-api-access-tsfsl") pod "6fa8e8db-f102-4f0c-9086-da639d8f90e2" (UID: "6fa8e8db-f102-4f0c-9086-da639d8f90e2"). InnerVolumeSpecName "kube-api-access-tsfsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.387944 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.387983 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tsfsl\" (UniqueName: \"kubernetes.io/projected/6fa8e8db-f102-4f0c-9086-da639d8f90e2-kube-api-access-tsfsl\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.857645 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lms5f" event={"ID":"6fa8e8db-f102-4f0c-9086-da639d8f90e2","Type":"ContainerDied","Data":"2230617edbaa829a4a6acfdbbc03b9dc31f70a2440e5508357f98f71721fc834"} Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.857698 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lms5f" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.857739 3561 scope.go:117] "RemoveContainer" containerID="18440e1bc38ef06ee542d7d3c4ecfaa99c398540478036aa61d2ac6c775cdaf4" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.862687 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fa8e8db-f102-4f0c-9086-da639d8f90e2" (UID: "6fa8e8db-f102-4f0c-9086-da639d8f90e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.894149 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa8e8db-f102-4f0c-9086-da639d8f90e2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.899090 3561 scope.go:117] "RemoveContainer" containerID="2b3a566ca8d6f32bfbf59e7118ee736e95d7b02b1dd756a05022ae46a402f954" Dec 03 00:11:35 crc kubenswrapper[3561]: I1203 00:11:35.948300 3561 scope.go:117] "RemoveContainer" containerID="dbab4bb0d1022385bed936fc69706bdbe24b3b8d8f7e5de1c906b4ad0ad9a320" Dec 03 00:11:36 crc kubenswrapper[3561]: I1203 00:11:36.202989 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lms5f"] Dec 03 00:11:36 crc kubenswrapper[3561]: I1203 00:11:36.206906 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lms5f"] Dec 03 00:11:37 crc kubenswrapper[3561]: I1203 00:11:37.672692 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" path="/var/lib/kubelet/pods/6fa8e8db-f102-4f0c-9086-da639d8f90e2/volumes" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.503210 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.503693 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.503733 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.503783 3561 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.503824 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.778569 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.778618 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.779165 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.779217 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" err="rpc error: code = NotFound desc = could not find container 
\"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.779706 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.779924 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843" err="rpc error: code = NotFound desc = could not find container \"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.780597 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.780638 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID 
starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.781076 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.781104 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.782611 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not found: ID does not exist" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.782644 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not 
found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.783083 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.783130 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.784473 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.784499 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 
00:11:41.784952 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.784986 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist" Dec 03 00:11:41 crc kubenswrapper[3561]: E1203 00:11:41.785506 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0" Dec 03 00:11:41 crc kubenswrapper[3561]: I1203 00:11:41.785531 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist" Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.085110 3561 generic.go:334] "Generic (PLEG): container finished" 
podID="a5813daf-5020-40ff-9715-a2ce6abf39c3" containerID="3f1b679263efce6e3f60928be2de5e3763bbc695f55148dbb64b676393b29e1e" exitCode=0
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.085161 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-czw72" event={"ID":"a5813daf-5020-40ff-9715-a2ce6abf39c3","Type":"ContainerDied","Data":"3f1b679263efce6e3f60928be2de5e3763bbc695f55148dbb64b676393b29e1e"}
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639034 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6qjcs"]
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639161 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: E1203 00:12:14.639370 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="extract-content"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639392 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="extract-content"
Dec 03 00:12:14 crc kubenswrapper[3561]: E1203 00:12:14.639412 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="extract-utilities"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639422 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="extract-utilities"
Dec 03 00:12:14 crc kubenswrapper[3561]: E1203 00:12:14.639440 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="registry-server"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639449 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="registry-server"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.639663 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa8e8db-f102-4f0c-9086-da639d8f90e2" containerName="registry-server"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.640261 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.642346 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.643207 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.682216 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.682290 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.682410 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.682456 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk75d\" (UniqueName: \"kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.782977 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783068 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783095 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tk75d\" (UniqueName: \"kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783122 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783187 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783693 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.783945 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.804949 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk75d\" (UniqueName: \"kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d\") pod \"cni-sysctl-allowlist-ds-6qjcs\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:14 crc kubenswrapper[3561]: I1203 00:12:14.958113 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.093792 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" event={"ID":"d481c4d8-22a9-41c8-9707-8642780a178a","Type":"ContainerStarted","Data":"da3a1d2bb4812f01801ad676c85e832f84c8059d8b0ec6428433786b995d1246"}
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.287647 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-czw72"
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.389236 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca\") pod \"a5813daf-5020-40ff-9715-a2ce6abf39c3\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") "
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.389410 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j97j9\" (UniqueName: \"kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9\") pod \"a5813daf-5020-40ff-9715-a2ce6abf39c3\" (UID: \"a5813daf-5020-40ff-9715-a2ce6abf39c3\") "
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.390360 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca" (OuterVolumeSpecName: "serviceca") pod "a5813daf-5020-40ff-9715-a2ce6abf39c3" (UID: "a5813daf-5020-40ff-9715-a2ce6abf39c3"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.395398 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9" (OuterVolumeSpecName: "kube-api-access-j97j9") pod "a5813daf-5020-40ff-9715-a2ce6abf39c3" (UID: "a5813daf-5020-40ff-9715-a2ce6abf39c3"). InnerVolumeSpecName "kube-api-access-j97j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.490657 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j97j9\" (UniqueName: \"kubernetes.io/projected/a5813daf-5020-40ff-9715-a2ce6abf39c3-kube-api-access-j97j9\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:15 crc kubenswrapper[3561]: I1203 00:12:15.490696 3561 reconciler_common.go:300] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a5813daf-5020-40ff-9715-a2ce6abf39c3-serviceca\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:16 crc kubenswrapper[3561]: I1203 00:12:16.099822 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-czw72" event={"ID":"a5813daf-5020-40ff-9715-a2ce6abf39c3","Type":"ContainerDied","Data":"ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2"}
Dec 03 00:12:16 crc kubenswrapper[3561]: I1203 00:12:16.100124 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea995d4f890e068e65b24d1fd74f83d8caacdb7298791c38e7e9801dbe2c0cc2"
Dec 03 00:12:16 crc kubenswrapper[3561]: I1203 00:12:16.099824 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-czw72"
Dec 03 00:12:16 crc kubenswrapper[3561]: I1203 00:12:16.101654 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" event={"ID":"d481c4d8-22a9-41c8-9707-8642780a178a","Type":"ContainerStarted","Data":"50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9"}
Dec 03 00:12:16 crc kubenswrapper[3561]: I1203 00:12:16.152599 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" podStartSLOduration=2.152486164 podStartE2EDuration="2.152486164s" podCreationTimestamp="2025-12-03 00:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:12:16.148880561 +0000 UTC m=+334.929314819" watchObservedRunningTime="2025-12-03 00:12:16.152486164 +0000 UTC m=+334.932920422"
Dec 03 00:12:17 crc kubenswrapper[3561]: I1203 00:12:17.107196 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:17 crc kubenswrapper[3561]: I1203 00:12:17.172327 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:17 crc kubenswrapper[3561]: I1203 00:12:17.679167 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6qjcs"]
Dec 03 00:12:19 crc kubenswrapper[3561]: I1203 00:12:19.115183 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" gracePeriod=30
Dec 03 00:12:24 crc kubenswrapper[3561]: E1203 00:12:24.962786 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:24 crc kubenswrapper[3561]: E1203 00:12:24.965863 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:24 crc kubenswrapper[3561]: E1203 00:12:24.966764 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:24 crc kubenswrapper[3561]: E1203 00:12:24.966888 3561 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:12:34 crc kubenswrapper[3561]: E1203 00:12:34.961614 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:34 crc kubenswrapper[3561]: E1203 00:12:34.964645 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:34 crc kubenswrapper[3561]: E1203 00:12:34.966349 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:34 crc kubenswrapper[3561]: E1203 00:12:34.966386 3561 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:12:41 crc kubenswrapper[3561]: I1203 00:12:41.504768 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:12:41 crc kubenswrapper[3561]: I1203 00:12:41.506404 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:12:41 crc kubenswrapper[3561]: I1203 00:12:41.506527 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:12:41 crc kubenswrapper[3561]: I1203 00:12:41.506724 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:12:41 crc kubenswrapper[3561]: I1203 00:12:41.506891 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:12:44 crc kubenswrapper[3561]: E1203 00:12:44.962364 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:44 crc kubenswrapper[3561]: E1203 00:12:44.965916 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:44 crc kubenswrapper[3561]: E1203 00:12:44.967137 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 00:12:44 crc kubenswrapper[3561]: E1203 00:12:44.967191 3561 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.254923 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6qjcs_d481c4d8-22a9-41c8-9707-8642780a178a/kube-multus-additional-cni-plugins/0.log"
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.255365 3561 generic.go:334] "Generic (PLEG): container finished" podID="d481c4d8-22a9-41c8-9707-8642780a178a" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" exitCode=137
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.255390 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" event={"ID":"d481c4d8-22a9-41c8-9707-8642780a178a","Type":"ContainerDied","Data":"50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9"}
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.255408 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs" event={"ID":"d481c4d8-22a9-41c8-9707-8642780a178a","Type":"ContainerDied","Data":"da3a1d2bb4812f01801ad676c85e832f84c8059d8b0ec6428433786b995d1246"}
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.255418 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3a1d2bb4812f01801ad676c85e832f84c8059d8b0ec6428433786b995d1246"
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.284411 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6qjcs_d481c4d8-22a9-41c8-9707-8642780a178a/kube-multus-additional-cni-plugins/0.log"
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.284476 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.392875 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready\") pod \"d481c4d8-22a9-41c8-9707-8642780a178a\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") "
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.392948 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk75d\" (UniqueName: \"kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d\") pod \"d481c4d8-22a9-41c8-9707-8642780a178a\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") "
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.393033 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir\") pod \"d481c4d8-22a9-41c8-9707-8642780a178a\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") "
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.393066 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist\") pod \"d481c4d8-22a9-41c8-9707-8642780a178a\" (UID: \"d481c4d8-22a9-41c8-9707-8642780a178a\") "
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.393504 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready" (OuterVolumeSpecName: "ready") pod "d481c4d8-22a9-41c8-9707-8642780a178a" (UID: "d481c4d8-22a9-41c8-9707-8642780a178a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.393565 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "d481c4d8-22a9-41c8-9707-8642780a178a" (UID: "d481c4d8-22a9-41c8-9707-8642780a178a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.393733 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "d481c4d8-22a9-41c8-9707-8642780a178a" (UID: "d481c4d8-22a9-41c8-9707-8642780a178a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.404823 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d" (OuterVolumeSpecName: "kube-api-access-tk75d") pod "d481c4d8-22a9-41c8-9707-8642780a178a" (UID: "d481c4d8-22a9-41c8-9707-8642780a178a"). InnerVolumeSpecName "kube-api-access-tk75d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.494034 3561 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d481c4d8-22a9-41c8-9707-8642780a178a-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.494069 3561 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d481c4d8-22a9-41c8-9707-8642780a178a-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.494080 3561 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d481c4d8-22a9-41c8-9707-8642780a178a-ready\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:49 crc kubenswrapper[3561]: I1203 00:12:49.494090 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tk75d\" (UniqueName: \"kubernetes.io/projected/d481c4d8-22a9-41c8-9707-8642780a178a-kube-api-access-tk75d\") on node \"crc\" DevicePath \"\""
Dec 03 00:12:50 crc kubenswrapper[3561]: I1203 00:12:50.259481 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6qjcs"
Dec 03 00:12:50 crc kubenswrapper[3561]: I1203 00:12:50.276188 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6qjcs"]
Dec 03 00:12:50 crc kubenswrapper[3561]: I1203 00:12:50.281000 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6qjcs"]
Dec 03 00:12:51 crc kubenswrapper[3561]: I1203 00:12:51.671981 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" path="/var/lib/kubelet/pods/d481c4d8-22a9-41c8-9707-8642780a178a/volumes"
Dec 03 00:12:57 crc kubenswrapper[3561]: I1203 00:12:57.623057 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:12:57 crc kubenswrapper[3561]: I1203 00:12:57.623383 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:13:27 crc kubenswrapper[3561]: I1203 00:13:27.623464 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:13:27 crc kubenswrapper[3561]: I1203 00:13:27.624311 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:13:41 crc kubenswrapper[3561]: I1203 00:13:41.507983 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:13:41 crc kubenswrapper[3561]: I1203 00:13:41.508860 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:13:41 crc kubenswrapper[3561]: I1203 00:13:41.508912 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:13:41 crc kubenswrapper[3561]: I1203 00:13:41.508974 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:13:41 crc kubenswrapper[3561]: I1203 00:13:41.509031 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:13:57 crc kubenswrapper[3561]: I1203 00:13:57.623611 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:13:57 crc kubenswrapper[3561]: I1203 00:13:57.624338 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:13:57 crc kubenswrapper[3561]: I1203 00:13:57.624387 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:13:57 crc kubenswrapper[3561]: I1203 00:13:57.625383 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 00:13:57 crc kubenswrapper[3561]: I1203 00:13:57.625658 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f" gracePeriod=600
Dec 03 00:13:58 crc kubenswrapper[3561]: I1203 00:13:58.666875 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f" exitCode=0
Dec 03 00:13:58 crc kubenswrapper[3561]: I1203 00:13:58.666951 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f"}
Dec 03 00:13:58 crc kubenswrapper[3561]: I1203 00:13:58.667259 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481"}
Dec 03 00:13:58 crc kubenswrapper[3561]: I1203 00:13:58.667287 3561 scope.go:117] "RemoveContainer" containerID="113805abfdc6c501aa825a452eb1d62ca3a6d97dc80e8b0884d3cb087f419251"
Dec 03 00:14:41 crc kubenswrapper[3561]: I1203 00:14:41.509963 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:14:41 crc kubenswrapper[3561]: I1203 00:14:41.510890 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:14:41 crc kubenswrapper[3561]: I1203 00:14:41.510927 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:14:41 crc kubenswrapper[3561]: I1203 00:14:41.510972 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:14:41 crc kubenswrapper[3561]: I1203 00:14:41.511014 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.158904 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"]
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.159581 3561 topology_manager.go:215] "Topology Admit Handler" podUID="df0f35a7-1f98-455e-bfcf-19c9d614d990" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: E1203 00:15:00.252898 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.253023 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:15:00 crc kubenswrapper[3561]: E1203 00:15:00.253154 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a5813daf-5020-40ff-9715-a2ce6abf39c3" containerName="image-pruner"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.253166 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5813daf-5020-40ff-9715-a2ce6abf39c3" containerName="image-pruner"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.253466 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="d481c4d8-22a9-41c8-9707-8642780a178a" containerName="kube-multus-additional-cni-plugins"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.253488 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5813daf-5020-40ff-9715-a2ce6abf39c3" containerName="image-pruner"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.254027 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.256940 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.257216 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.268857 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"]
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.372463 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.372563 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.372594 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9br92\" (UniqueName: \"kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.474315 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.474371 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9br92\" (UniqueName: \"kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.474409 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.475481 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.483266 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.495036 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9br92\" (UniqueName: \"kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92\") pod \"collect-profiles-29412015-h79mz\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:00 crc kubenswrapper[3561]: I1203 00:15:00.585456 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"
Dec 03 00:15:01 crc kubenswrapper[3561]: I1203 00:15:01.185090 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz"]
Dec 03 00:15:02 crc kubenswrapper[3561]: I1203 00:15:02.137018 3561 generic.go:334] "Generic (PLEG): container finished" podID="df0f35a7-1f98-455e-bfcf-19c9d614d990" containerID="94169b68245d1319fec22a35fa312122cdc514063848072b69b4b839aa983bbc" exitCode=0
Dec 03 00:15:02 crc kubenswrapper[3561]: I1203 00:15:02.137096 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz" event={"ID":"df0f35a7-1f98-455e-bfcf-19c9d614d990","Type":"ContainerDied","Data":"94169b68245d1319fec22a35fa312122cdc514063848072b69b4b839aa983bbc"}
Dec 03 00:15:02 crc kubenswrapper[3561]: I1203 00:15:02.137407 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz" event={"ID":"df0f35a7-1f98-455e-bfcf-19c9d614d990","Type":"ContainerStarted","Data":"827ddce56b12cee1f8fbaaba19ca522750faa49e16ea69019ae12e5592bde6ce"}
Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.412910 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.523225 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume\") pod \"df0f35a7-1f98-455e-bfcf-19c9d614d990\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.523709 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9br92\" (UniqueName: \"kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92\") pod \"df0f35a7-1f98-455e-bfcf-19c9d614d990\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.524072 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume\") pod \"df0f35a7-1f98-455e-bfcf-19c9d614d990\" (UID: \"df0f35a7-1f98-455e-bfcf-19c9d614d990\") " Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.524148 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume" (OuterVolumeSpecName: "config-volume") pod "df0f35a7-1f98-455e-bfcf-19c9d614d990" (UID: "df0f35a7-1f98-455e-bfcf-19c9d614d990"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.533059 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92" (OuterVolumeSpecName: "kube-api-access-9br92") pod "df0f35a7-1f98-455e-bfcf-19c9d614d990" (UID: "df0f35a7-1f98-455e-bfcf-19c9d614d990"). 
InnerVolumeSpecName "kube-api-access-9br92". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.537467 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df0f35a7-1f98-455e-bfcf-19c9d614d990" (UID: "df0f35a7-1f98-455e-bfcf-19c9d614d990"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.625180 3561 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0f35a7-1f98-455e-bfcf-19c9d614d990-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.625642 3561 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0f35a7-1f98-455e-bfcf-19c9d614d990-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[3561]: I1203 00:15:03.625799 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9br92\" (UniqueName: \"kubernetes.io/projected/df0f35a7-1f98-455e-bfcf-19c9d614d990-kube-api-access-9br92\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:04 crc kubenswrapper[3561]: I1203 00:15:04.150037 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz" event={"ID":"df0f35a7-1f98-455e-bfcf-19c9d614d990","Type":"ContainerDied","Data":"827ddce56b12cee1f8fbaaba19ca522750faa49e16ea69019ae12e5592bde6ce"} Dec 03 00:15:04 crc kubenswrapper[3561]: I1203 00:15:04.150070 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-h79mz" Dec 03 00:15:04 crc kubenswrapper[3561]: I1203 00:15:04.150087 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827ddce56b12cee1f8fbaaba19ca522750faa49e16ea69019ae12e5592bde6ce" Dec 03 00:15:04 crc kubenswrapper[3561]: I1203 00:15:04.538422 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Dec 03 00:15:04 crc kubenswrapper[3561]: I1203 00:15:04.546263 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Dec 03 00:15:05 crc kubenswrapper[3561]: I1203 00:15:05.674190 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" path="/var/lib/kubelet/pods/51936587-a4af-470d-ad92-8ab9062cbc72/volumes" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.124836 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.125466 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" podNamespace="openshift-kube-apiserver" podName="installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: E1203 00:15:14.146673 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="df0f35a7-1f98-455e-bfcf-19c9d614d990" containerName="collect-profiles" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.146994 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f35a7-1f98-455e-bfcf-19c9d614d990" containerName="collect-profiles" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.147406 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f35a7-1f98-455e-bfcf-19c9d614d990" containerName="collect-profiles" Dec 03 00:15:14 crc kubenswrapper[3561]: 
I1203 00:15:14.148024 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.148284 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.154898 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.154966 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.295834 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.296703 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.296843 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.398594 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.398696 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.398764 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.398763 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.398855 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir\") pod \"installer-13-crc\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.431139 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access\") pod \"installer-13-crc\" (UID: 
\"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.486234 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Dec 03 00:15:14 crc kubenswrapper[3561]: I1203 00:15:14.717001 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 03 00:15:15 crc kubenswrapper[3561]: I1203 00:15:15.221052 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"cb0f975b-347e-4c02-8f84-22a14ac75a3c","Type":"ContainerStarted","Data":"953f389ae07a4bc3ef18c52f4fa68601e94d06db220431180e8cf12cbe081b61"} Dec 03 00:15:17 crc kubenswrapper[3561]: I1203 00:15:17.235362 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"cb0f975b-347e-4c02-8f84-22a14ac75a3c","Type":"ContainerStarted","Data":"73c9951019c20cea9813f787d59c954479d6ac3f3ece65de89ac59a3049dfea4"} Dec 03 00:15:17 crc kubenswrapper[3561]: I1203 00:15:17.267028 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-13-crc" podStartSLOduration=3.266907076 podStartE2EDuration="3.266907076s" podCreationTimestamp="2025-12-03 00:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:15:17.264194261 +0000 UTC m=+516.044628559" watchObservedRunningTime="2025-12-03 00:15:17.266907076 +0000 UTC m=+516.047341414" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.369612 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-7flrt"] Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.369877 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6d64dc54-a515-47d8-b966-189e64d53f6c" 
podNamespace="openshift-image-registry" podName="image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.371944 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.395962 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-7flrt"] Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451353 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-tls\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451416 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24kk\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-kube-api-access-q24kk\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451456 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6d64dc54-a515-47d8-b966-189e64d53f6c-installation-pull-secrets\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451602 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-trusted-ca\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451652 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6d64dc54-a515-47d8-b966-189e64d53f6c-ca-trust-extracted\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451688 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-bound-sa-token\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451817 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.451884 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-certificates\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 
00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.473341 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.553160 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-certificates\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.553230 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-tls\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.554530 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-q24kk\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-kube-api-access-q24kk\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.554602 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6d64dc54-a515-47d8-b966-189e64d53f6c-installation-pull-secrets\") pod 
\"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.554628 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-trusted-ca\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.554843 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-certificates\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.555085 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6d64dc54-a515-47d8-b966-189e64d53f6c-ca-trust-extracted\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.555118 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-bound-sa-token\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.555627 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/6d64dc54-a515-47d8-b966-189e64d53f6c-ca-trust-extracted\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.555960 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d64dc54-a515-47d8-b966-189e64d53f6c-trusted-ca\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.563471 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-registry-tls\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.564767 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6d64dc54-a515-47d8-b966-189e64d53f6c-installation-pull-secrets\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.584360 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24kk\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-kube-api-access-q24kk\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.598499 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d64dc54-a515-47d8-b966-189e64d53f6c-bound-sa-token\") pod \"image-registry-75b7bb6564-7flrt\" (UID: \"6d64dc54-a515-47d8-b966-189e64d53f6c\") " pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.703286 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:18 crc kubenswrapper[3561]: I1203 00:15:18.965265 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-7flrt"] Dec 03 00:15:18 crc kubenswrapper[3561]: W1203 00:15:18.974212 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d64dc54_a515_47d8_b966_189e64d53f6c.slice/crio-80127f0f371ebb045bb6cfb3a7199149b708e87133121277b43de0e699500c2c WatchSource:0}: Error finding container 80127f0f371ebb045bb6cfb3a7199149b708e87133121277b43de0e699500c2c: Status 404 returned error can't find the container with id 80127f0f371ebb045bb6cfb3a7199149b708e87133121277b43de0e699500c2c Dec 03 00:15:19 crc kubenswrapper[3561]: I1203 00:15:19.247128 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" event={"ID":"6d64dc54-a515-47d8-b966-189e64d53f6c","Type":"ContainerStarted","Data":"3681ffdbf7f12acb24539ce886ceedd4d83995b21ff03180098edec2a04b0c15"} Dec 03 00:15:19 crc kubenswrapper[3561]: I1203 00:15:19.247166 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" event={"ID":"6d64dc54-a515-47d8-b966-189e64d53f6c","Type":"ContainerStarted","Data":"80127f0f371ebb045bb6cfb3a7199149b708e87133121277b43de0e699500c2c"} Dec 03 00:15:19 crc kubenswrapper[3561]: I1203 00:15:19.274892 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" podStartSLOduration=1.274848874 podStartE2EDuration="1.274848874s" podCreationTimestamp="2025-12-03 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:15:19.273059889 +0000 UTC m=+518.053494147" watchObservedRunningTime="2025-12-03 00:15:19.274848874 +0000 UTC m=+518.055283142" Dec 03 00:15:20 crc kubenswrapper[3561]: I1203 00:15:20.252252 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt" Dec 03 00:15:22 crc kubenswrapper[3561]: I1203 00:15:22.901126 3561 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 03 00:15:22 crc kubenswrapper[3561]: I1203 00:15:22.901482 3561 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:15:34 crc kubenswrapper[3561]: I1203 00:15:34.191167 3561 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 03 00:15:34 crc kubenswrapper[3561]: I1203 00:15:34.222466 3561 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 03 00:15:35 crc kubenswrapper[3561]: I1203 00:15:35.364887 3561 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 03 00:15:38 crc kubenswrapper[3561]: I1203 00:15:38.715636 3561 
kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75b7bb6564-7flrt"
Dec 03 00:15:39 crc kubenswrapper[3561]: I1203 00:15:39.271200 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"]
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.511641 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.511717 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.511746 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.511791 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.511831 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:15:41 crc kubenswrapper[3561]: E1203 00:15:41.848516 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"
Dec 03 00:15:41 crc kubenswrapper[3561]: I1203 00:15:41.848955 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.080186 3561 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.080709 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7dae59545f22b3fb679a7fbf878a6379" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.083134 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.083778 3561 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.083842 3561 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.083909 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7f3419c3ca30b18b78e8dd2488b00489" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084304 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" containerID="cri-o://7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2" gracePeriod=15
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084366 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9" gracePeriod=15
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084382 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a" gracePeriod=15
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084318 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75" gracePeriod=15
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084347 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3" gracePeriod=15
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.084211 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084651 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.084683 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.084701 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.085077 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085117 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.085160 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085184 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.085214 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085233 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.085262 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085281 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085603 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.085673 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.086674 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.086711 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.086728 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.144839 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212042 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212105 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212158 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212206 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212239 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212259 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212285 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.212310 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313436 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313779 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313521 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313825 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313847 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313853 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313885 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313890 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313918 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313921 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313939 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313952 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313964 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313975 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.313990 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.314023 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.446957 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:15:55 crc kubenswrapper[3561]: E1203 00:15:55.492861 3561 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d8c5befbc2a1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:15:55.491101212 +0000 UTC m=+554.271535510,LastTimestamp:2025-12-03 00:15:55.491101212 +0000 UTC m=+554.271535510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.537793 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"4bd23870d90c8ec5e086e64068fb9cbfdce6ec34c4feb6d0d459fcecbbbed086"}
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.541891 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.542851 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75" exitCode=0
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.542915 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3" exitCode=0
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.542947 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a" exitCode=0
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.542969 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9" exitCode=2
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.545957 3561 generic.go:334] "Generic (PLEG): container finished" podID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" containerID="73c9951019c20cea9813f787d59c954479d6ac3f3ece65de89ac59a3049dfea4" exitCode=0
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.545993 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"cb0f975b-347e-4c02-8f84-22a14ac75a3c","Type":"ContainerDied","Data":"73c9951019c20cea9813f787d59c954479d6ac3f3ece65de89ac59a3049dfea4"}
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.547071 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.547362 3561 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:55 crc kubenswrapper[3561]: I1203 00:15:55.547887 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.553483 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055"}
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.554710 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.555377 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.828301 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.828965 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.829402 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.933474 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock\") pod \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") "
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.933635 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access\") pod \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") "
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.933680 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir\") pod \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\" (UID: \"cb0f975b-347e-4c02-8f84-22a14ac75a3c\") "
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.933686 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock" (OuterVolumeSpecName: "var-lock") pod "cb0f975b-347e-4c02-8f84-22a14ac75a3c" (UID: "cb0f975b-347e-4c02-8f84-22a14ac75a3c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.933955 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb0f975b-347e-4c02-8f84-22a14ac75a3c" (UID: "cb0f975b-347e-4c02-8f84-22a14ac75a3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.935102 3561 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-var-lock\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.935137 3561 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:56 crc kubenswrapper[3561]: I1203 00:15:56.944673 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb0f975b-347e-4c02-8f84-22a14ac75a3c" (UID: "cb0f975b-347e-4c02-8f84-22a14ac75a3c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.036653 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb0f975b-347e-4c02-8f84-22a14ac75a3c-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.515352 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.516692 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.517583 3561 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.518156 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.518659 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556143 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556267 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556286 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556391 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556431 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556531 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556903 3561 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556937 3561 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.556961 3561 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.561235 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.562185 3561 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2" exitCode=0
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.562269 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.562316 3561 scope.go:117] "RemoveContainer" containerID="5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.564668 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"cb0f975b-347e-4c02-8f84-22a14ac75a3c","Type":"ContainerDied","Data":"953f389ae07a4bc3ef18c52f4fa68601e94d06db220431180e8cf12cbe081b61"}
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.564715 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="953f389ae07a4bc3ef18c52f4fa68601e94d06db220431180e8cf12cbe081b61"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.564790 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.565536 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.566286 3561 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.567005 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.579942 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.580609 3561 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.581078 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.593342 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.594138 3561 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.594907 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.601498 3561 scope.go:117] "RemoveContainer" containerID="9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.656749 3561 scope.go:117] "RemoveContainer" containerID="a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.673651 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae85115fdc231b4002b57317b41a6400" path="/var/lib/kubelet/pods/ae85115fdc231b4002b57317b41a6400/volumes"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.689934 3561 scope.go:117] "RemoveContainer" containerID="f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.721634 3561 scope.go:117] "RemoveContainer" containerID="7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.752639 3561 scope.go:117] "RemoveContainer" containerID="299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.794033 3561 scope.go:117] "RemoveContainer" containerID="5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75"
Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.795040 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75\": container with ID starting with 5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75 not found: ID does not exist" containerID="5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.795131 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75"} err="failed to get container status \"5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75\": rpc error: code = NotFound desc = could not find container \"5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75\": container with ID starting with 5a004e668c7d85df7568dd1d9ed5860aabc433c48812a25dd28950e163264d75 not found: ID does not exist"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.795159 3561 scope.go:117] "RemoveContainer" containerID="9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3"
Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.795575 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3\": container with ID starting with 9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3 not found: ID does not exist" containerID="9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3"
Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.795623 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3"} err="failed to get container status \"9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3\": rpc error: code = NotFound
desc = could not find container \"9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3\": container with ID starting with 9638132d1625507bf3df34a9f12230e8d2de16528f88e84fe4b9b664929bfef3 not found: ID does not exist" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.795635 3561 scope.go:117] "RemoveContainer" containerID="a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a" Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.796075 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a\": container with ID starting with a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a not found: ID does not exist" containerID="a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.796222 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a"} err="failed to get container status \"a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a\": rpc error: code = NotFound desc = could not find container \"a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a\": container with ID starting with a96615dc8e87c2cef2cf079ec058cfe28877ea716ecf26bb099234d80853ff0a not found: ID does not exist" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.796249 3561 scope.go:117] "RemoveContainer" containerID="f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9" Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.796854 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9\": container with ID starting with 
f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9 not found: ID does not exist" containerID="f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.796900 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9"} err="failed to get container status \"f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9\": rpc error: code = NotFound desc = could not find container \"f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9\": container with ID starting with f978dba5e3e5c0aff07b26d0a8059f01e5fd7ca22a8eef0dd99560149ac353d9 not found: ID does not exist" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.796929 3561 scope.go:117] "RemoveContainer" containerID="7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2" Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.797466 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2\": container with ID starting with 7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2 not found: ID does not exist" containerID="7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.797605 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2"} err="failed to get container status \"7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2\": rpc error: code = NotFound desc = could not find container \"7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2\": container with ID starting with 7b44fa73d3f68543213d024b92ab7ce7fb7d65d0107f504404461c11722595b2 not found: ID 
does not exist" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.797635 3561 scope.go:117] "RemoveContainer" containerID="299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952" Dec 03 00:15:57 crc kubenswrapper[3561]: E1203 00:15:57.798381 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952\": container with ID starting with 299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952 not found: ID does not exist" containerID="299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952" Dec 03 00:15:57 crc kubenswrapper[3561]: I1203 00:15:57.798439 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952"} err="failed to get container status \"299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952\": rpc error: code = NotFound desc = could not find container \"299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952\": container with ID starting with 299136b4947012b9172489c064874bf7603c2d89776eb9145340e858fe47c952 not found: ID does not exist" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.050359 3561 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.051116 3561 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.051903 3561 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.052451 3561 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.053120 3561 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: I1203 00:15:58.053182 3561 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.053769 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="200ms" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.254820 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="400ms" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.447363 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 
00:15:58.447533 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.447694 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.447818 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.447945 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.447974 3561 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:15:58 crc kubenswrapper[3561]: E1203 00:15:58.655792 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="800ms" Dec 03 00:15:59 crc kubenswrapper[3561]: E1203 00:15:59.457135 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="1.6s" Dec 03 00:16:01 
crc kubenswrapper[3561]: E1203 00:16:01.058810 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="3.2s" Dec 03 00:16:01 crc kubenswrapper[3561]: I1203 00:16:01.666118 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:01 crc kubenswrapper[3561]: I1203 00:16:01.666458 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:04 crc kubenswrapper[3561]: E1203 00:16:04.261005 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="6.4s" Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.409480 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" containerID="cri-o://b83cd103236d01d38fc2aa5c593883b30fde4f3fc27c3f90b045d52f47698a34" gracePeriod=30 Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.608893 3561 generic.go:334] "Generic (PLEG): container finished" podID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" 
containerID="b83cd103236d01d38fc2aa5c593883b30fde4f3fc27c3f90b045d52f47698a34" exitCode=0 Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.608945 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"b83cd103236d01d38fc2aa5c593883b30fde4f3fc27c3f90b045d52f47698a34"} Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.860532 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.861936 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.863523 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:04 crc kubenswrapper[3561]: I1203 00:16:04.864172 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:04 crc kubenswrapper[3561]: E1203 00:16:04.925502 3561 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d8c5befbc2a1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:15:55.491101212 +0000 UTC m=+554.271535510,LastTimestamp:2025-12-03 00:15:55.491101212 +0000 UTC m=+554.271535510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.059858 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.059928 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.059969 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: 
\"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.060022 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.060081 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.060119 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.060151 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.060284 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: 
\"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.061701 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.062060 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.062244 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.078967 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.079013 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.079926 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.079994 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" (OuterVolumeSpecName: "kube-api-access-scpwv") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "kube-api-access-scpwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.088367 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161019 3561 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161056 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161069 3561 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161081 3561 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161090 3561 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161100 3561 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.161110 3561 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:16:05 crc 
kubenswrapper[3561]: I1203 00:16:05.618584 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"4e38c0efffd2f4b7d6016e17c90aa1b8c0441ce4ee182704bc39c3f7e1481e75"} Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.618923 3561 scope.go:117] "RemoveContainer" containerID="b83cd103236d01d38fc2aa5c593883b30fde4f3fc27c3f90b045d52f47698a34" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.618632 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.619588 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.619927 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.620254 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused" Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.642305 3561 
status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.642719 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:05 crc kubenswrapper[3561]: I1203 00:16:05.643062 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.799037 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.799604 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.800134 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.800608 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.800985 3561 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:08 crc kubenswrapper[3561]: E1203 00:16:08.801030 3561 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.654942 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.655370 3561 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="3170adb4d964bb1b0d4fcefac2050bb117aeab3fbaf35e07671fe5c034d5cf00" exitCode=1
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.655414 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"3170adb4d964bb1b0d4fcefac2050bb117aeab3fbaf35e07671fe5c034d5cf00"}
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.656119 3561 scope.go:117] "RemoveContainer" containerID="3170adb4d964bb1b0d4fcefac2050bb117aeab3fbaf35e07671fe5c034d5cf00"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.656783 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.657352 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.657849 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.658407 3561 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:09 crc kubenswrapper[3561]: I1203 00:16:09.809000 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:16:10 crc kubenswrapper[3561]: E1203 00:16:10.662592 3561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="7s"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.663724 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.664740 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.665388 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.666401 3561 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.666822 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.668116 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.668204 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"ec33e131d4b5835fae4c8df68c644bb5d1bfc3a694bbc9ae27a411bec4fbc2b4"}
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.682185 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.683004 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.683679 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.684056 3561 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.732631 3561 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.732679 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:10 crc kubenswrapper[3561]: E1203 00:16:10.733220 3561 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:10 crc kubenswrapper[3561]: I1203 00:16:10.734115 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.026853 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.027219 3561 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.027317 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.669340 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.669717 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.670301 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.670706 3561 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.673013 3561 status_manager.go:853] "Failed to get status for pod" podUID="7f3419c3ca30b18b78e8dd2488b00489" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.676647 3561 generic.go:334] "Generic (PLEG): container finished" podID="7f3419c3ca30b18b78e8dd2488b00489" containerID="5e6d7f114b66ee10002b0a161422dd41597b06f70fe6401653f107e88aba2e57" exitCode=0
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.680451 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerDied","Data":"5e6d7f114b66ee10002b0a161422dd41597b06f70fe6401653f107e88aba2e57"}
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.680496 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"50b0531bcf06edc7b8a82adfdfebbb6d8cb302f4f1a294bdc3758b3f0f9fcd84"}
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.680865 3561 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.680890 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.681560 3561 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: E1203 00:16:11.681731 3561 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.681770 3561 status_manager.go:853] "Failed to get status for pod" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-75779c45fd-v2j2v\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.682022 3561 status_manager.go:853] "Failed to get status for pod" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.682208 3561 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:11 crc kubenswrapper[3561]: I1203 00:16:11.682391 3561 status_manager.go:853] "Failed to get status for pod" podUID="7f3419c3ca30b18b78e8dd2488b00489" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused"
Dec 03 00:16:12 crc kubenswrapper[3561]: I1203 00:16:12.690874 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"5f04b65a9e253fb30740d0e4bcead0eeb6be3ad9aacf99924ebaf09cec3098f8"}
Dec 03 00:16:12 crc kubenswrapper[3561]: I1203 00:16:12.691240 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"2549aee9db66b789c28c2e4fd05568ef57972e857db651012006349179870b1a"}
Dec 03 00:16:12 crc kubenswrapper[3561]: I1203 00:16:12.691256 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"30bf0a824efe7b1364bc9578d3e2d6663732ed857ff8fdfeea2f0d972e8bdac4"}
Dec 03 00:16:13 crc kubenswrapper[3561]: I1203 00:16:13.699780 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"d428cc12a45480ce5e66a47a690f91b61a7c336e6152a551dbd1d70c55ab9c1a"}
Dec 03 00:16:13 crc kubenswrapper[3561]: I1203 00:16:13.700073 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"724275e01d4936550b290415e4962698d0b003bfb036684d273a28866a3a1aa5"}
Dec 03 00:16:13 crc kubenswrapper[3561]: I1203 00:16:13.700257 3561 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:13 crc kubenswrapper[3561]: I1203 00:16:13.700282 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:15 crc kubenswrapper[3561]: I1203 00:16:15.734768 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:15 crc kubenswrapper[3561]: I1203 00:16:15.735695 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:15 crc kubenswrapper[3561]: I1203 00:16:15.740736 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:16 crc kubenswrapper[3561]: I1203 00:16:16.268117 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:16:18 crc kubenswrapper[3561]: I1203 00:16:18.721739 3561 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:18 crc kubenswrapper[3561]: I1203 00:16:18.779254 3561 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b73e61-d8d2-4892-8a19-005929c9d4e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:16:11Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:16:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:16:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30bf0a824efe7b1364bc9578d3e2d6663732ed857ff8fdfeea2f0d972e8bdac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:16:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f04b65a9e253fb30740d0e4bcead0eeb6be3ad9aacf99924ebaf09cec3098f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:16:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2549aee9db66b789c28c2e4fd05568ef57972e857db651012006349179870b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:16:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d428cc12a45480ce5e66a47a690f91b61a7c336e6152a551dbd1d70c55ab9c1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:16:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://724275e01d4936550b290415e4962698d0b003bfb036684d273a28866a3a1aa5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:16:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6d7f114b66ee10002b0a161422dd41597b06f70fe6401653f107e88aba2e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e6d7f114b66ee10002b0a161422dd41597b06f70fe6401653f107e88aba2e57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:16:11Z\\\"}}}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"d1b73e61-d8d2-4892-8a19-005929c9d4e1\": field is immutable"
Dec 03 00:16:18 crc kubenswrapper[3561]: I1203 00:16:18.873084 3561 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="a1f2b672-d932-427b-b0ee-ded3d2c6a272"
Dec 03 00:16:19 crc kubenswrapper[3561]: I1203 00:16:19.732094 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 00:16:19 crc kubenswrapper[3561]: I1203 00:16:19.732313 3561 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:19 crc kubenswrapper[3561]: I1203 00:16:19.732346 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:19 crc kubenswrapper[3561]: I1203 00:16:19.738253 3561 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="a1f2b672-d932-427b-b0ee-ded3d2c6a272"
Dec 03 00:16:20 crc kubenswrapper[3561]: I1203 00:16:20.736977 3561 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:20 crc kubenswrapper[3561]: I1203 00:16:20.737006 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 03 00:16:20 crc kubenswrapper[3561]: I1203 00:16:20.741798 3561 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="a1f2b672-d932-427b-b0ee-ded3d2c6a272"
Dec 03 00:16:21 crc kubenswrapper[3561]: I1203 00:16:21.026869 3561 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 03 00:16:21 crc kubenswrapper[3561]: I1203 00:16:21.027003 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 03 00:16:27 crc kubenswrapper[3561]: I1203 00:16:27.623446 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:16:27 crc kubenswrapper[3561]: I1203 00:16:27.624191 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:16:28 crc kubenswrapper[3561]: I1203 00:16:28.842649 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 00:16:28 crc kubenswrapper[3561]: I1203 00:16:28.925229 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 03 00:16:29 crc kubenswrapper[3561]: I1203 00:16:29.342311 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 03 00:16:29 crc kubenswrapper[3561]: I1203 00:16:29.862122 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.109131 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.162060 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.283043 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.284275 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.609940 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.680737 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.790927 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.799137 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.819139 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.970423 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.970665 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 00:16:30 crc kubenswrapper[3561]: I1203 00:16:30.992807 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.034145 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.042089 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.427594 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.566311 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.650984 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.668308 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.671743 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 00:16:31 crc kubenswrapper[3561]: I1203 00:16:31.954958 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.029980 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.043483 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.172806 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.310942 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.391985 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.447754 3561 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.523791 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.524304 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.797049 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.807966 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.913958 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.937063 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.968858 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 00:16:32 crc kubenswrapper[3561]: I1203 00:16:32.991150 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.183355 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.231757 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.309252 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.366434 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.404106 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.428293 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.437225 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.460454 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.492098 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.593977 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.725070 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.794181 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.844463 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.894427 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 00:16:33 crc kubenswrapper[3561]: I1203 00:16:33.945505 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.019847 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.092012 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.157937 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.170437 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.172794 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.234121 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.234424 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.325128 3561 reflector.go:351]
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.417605 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.500498 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.526430 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.576163 3561 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.609282 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.753011 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.792286 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.812151 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.845871 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.894295 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 
03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.896260 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.898761 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:16:34 crc kubenswrapper[3561]: I1203 00:16:34.902988 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.121059 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.178082 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.206904 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.214368 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.217943 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.295525 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.298551 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.364705 3561 reflector.go:351] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.462128 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.657468 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.686794 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.802297 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.910845 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.925258 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.976431 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:16:35 crc kubenswrapper[3561]: I1203 00:16:35.990092 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.067674 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.073775 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.090894 3561 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.133681 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.213447 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.252872 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.263018 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.365071 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.368257 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.506780 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.514271 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.680371 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.680485 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 
00:16:36.716951 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.738872 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.789908 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.868501 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.975010 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:16:36 crc kubenswrapper[3561]: I1203 00:16:36.978930 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.012253 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.042253 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.051639 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.058011 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.218662 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 
03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.220976 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.226230 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.284085 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.373281 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.374809 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.423488 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.457279 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.811783 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.956762 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.957641 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:16:37 crc kubenswrapper[3561]: I1203 00:16:37.970233 3561 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.046362 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.126760 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.210267 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.258284 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.299360 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.354098 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.407932 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.580151 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.581423 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.641676 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.687912 3561 reflector.go:351] Caches 
populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.812638 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.819339 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.885242 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:16:38 crc kubenswrapper[3561]: I1203 00:16:38.903316 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.052315 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.089621 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.215052 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.223199 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.285319 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.397215 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.485622 3561 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.566919 3561 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.567789 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.567680879 podStartE2EDuration="44.567680879s" podCreationTimestamp="2025-12-03 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:16:18.788418323 +0000 UTC m=+577.568852581" watchObservedRunningTime="2025-12-03 00:16:39.567680879 +0000 UTC m=+598.348115177" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.573446 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.573509 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.578505 3561 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.580108 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.582785 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.601519 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=21.601465603 podStartE2EDuration="21.601465603s" podCreationTimestamp="2025-12-03 00:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:16:39.59819068 +0000 UTC m=+598.378624948" watchObservedRunningTime="2025-12-03 00:16:39.601465603 +0000 UTC m=+598.381899861" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.627051 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.670885 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" path="/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.719327 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.726580 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.747964 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.781147 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.954377 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:16:39 crc kubenswrapper[3561]: I1203 00:16:39.961092 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.297781 3561 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.297968 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.298139 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.412451 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.494322 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.522619 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.754056 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.817577 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:16:40 crc kubenswrapper[3561]: I1203 00:16:40.966607 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.027030 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.109961 3561 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:16:41 crc 
kubenswrapper[3561]: I1203 00:16:41.110209 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" containerID="cri-o://09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055" gracePeriod=5 Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.238900 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.327713 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.403641 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.493460 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.513590 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.513681 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.513764 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.513793 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.513829 3561 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.611089 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.729580 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.732307 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.900881 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 not found: ID does not exist" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.900962 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 not found: ID does not exist" Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.903070 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist" 
containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.903182 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.904208 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309\": container with ID starting with 541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309 not found: ID does not exist" containerID="541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.904260 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309" err="rpc error: code = NotFound desc = could not find container \"541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309\": container with ID starting with 541d2c55015dd9598833b9963af2b6381cb6ee6b4d7dfc71628357dfa5061309 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.904868 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20\": container with ID starting with 78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20 not found: ID does not exist" containerID="78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.904953 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20" err="rpc error: code = NotFound desc = could not find container \"78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20\": container with ID starting with 78cb312fe6c14e0be87fca6e1a4b453d849031a170e3099a748a5dc1734bce20 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.905415 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.905464 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.906302 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559\": container with ID starting with 557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559 not found: ID does not exist" containerID="557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.906348 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559" err="rpc error: code = NotFound desc = could not find container \"557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559\": container with ID starting with 557fc6cd7e0f29b886d47058e8b38fb7651b704349348a199e472472afd9f559 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.906837 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.906918 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.907813 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3\": container with ID starting with 8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3 not found: ID does not exist" containerID="8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.907846 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3" err="rpc error: code = NotFound desc = could not find container \"8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3\": container with ID starting with 8b7d1a71fb42c2b734b2de91015cb7a197949c7b4aca2f3eade37c41d44c78c3 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.908467 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011\": container with ID starting with b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011 not found: ID does not exist" containerID="b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.908531 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011" err="rpc error: code = NotFound desc = could not find container \"b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011\": container with ID starting with b20d9e3247e3ca216c562c550c7a83a115ba0a89b5d1d090a2aa032014db1011 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.909239 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.909275 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.909840 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.909869 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.910397 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8\": container with ID starting with d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8 not found: ID does not exist" containerID="d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.910436 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8" err="rpc error: code = NotFound desc = could not find container \"d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8\": container with ID starting with d50b8eb29a7c52ebdea06aab6550fa8d58962de54813379bf63765c09422ebd8 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.910975 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.911007 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" err="rpc error: code = NotFound desc = could not find container \"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist"
Dec 03 00:16:41 crc kubenswrapper[3561]: E1203 00:16:41.911456 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45\": container with ID starting with 9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45 not found: ID does not exist" containerID="9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45"
Dec 03 00:16:41 crc kubenswrapper[3561]: I1203 00:16:41.911482 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45" err="rpc error: code = NotFound desc = could not find container \"9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45\": container with ID starting with 9bb1000c58bc7c4eea005309467de4797fb76ca9de00e270acf1bc87f9b83c45 not found: ID does not exist"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.018155 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.040835 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.122698 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.152895 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.186735 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.312069 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.409427 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.435942 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.678441 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.801866 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 00:16:42 crc kubenswrapper[3561]: I1203 00:16:42.928399 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.140128 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.521158 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.529341 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.541381 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.681180 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.807448 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.867904 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.904901 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 00:16:43 crc kubenswrapper[3561]: I1203 00:16:43.928603 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 03 00:16:44 crc kubenswrapper[3561]: I1203 00:16:44.102661 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.246733 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log"
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.247035 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.349863 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.349927 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350013 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350045 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350071 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350073 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock" (OuterVolumeSpecName: "var-lock") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350086 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350246 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests" (OuterVolumeSpecName: "manifests") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350273 3561 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") on node \"crc\" DevicePath \"\""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350288 3561 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.350860 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log" (OuterVolumeSpecName: "var-log") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.365802 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.451077 3561 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.451114 3561 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") on node \"crc\" DevicePath \"\""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.451126 3561 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") on node \"crc\" DevicePath \"\""
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.995409 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log"
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.995478 3561 generic.go:334] "Generic (PLEG): container finished" podID="7dae59545f22b3fb679a7fbf878a6379" containerID="09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055" exitCode=137
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.995591 3561 scope.go:117] "RemoveContainer" containerID="09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055"
Dec 03 00:16:46 crc kubenswrapper[3561]: I1203 00:16:46.995645 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.026591 3561 scope.go:117] "RemoveContainer" containerID="09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055"
Dec 03 00:16:47 crc kubenswrapper[3561]: E1203 00:16:47.027151 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055\": container with ID starting with 09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055 not found: ID does not exist" containerID="09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055"
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.027280 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055"} err="failed to get container status \"09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055\": rpc error: code = NotFound desc = could not find container \"09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055\": container with ID starting with 09842c0ee81950ed48dfbf34429bd12dc06a9d0f9c328e4e6049a4803f9cb055 not found: ID does not exist"
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.671877 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dae59545f22b3fb679a7fbf878a6379" path="/var/lib/kubelet/pods/7dae59545f22b3fb679a7fbf878a6379/volumes"
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.672199 3561 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.681076 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.681122 3561 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a27aca4-2237-4bea-9e84-3979eca52be3"
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.685119 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 00:16:47 crc kubenswrapper[3561]: I1203 00:16:47.685169 3561 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a27aca4-2237-4bea-9e84-3979eca52be3"
Dec 03 00:16:56 crc kubenswrapper[3561]: I1203 00:16:56.874160 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 00:16:57 crc kubenswrapper[3561]: I1203 00:16:57.623409 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:16:57 crc kubenswrapper[3561]: I1203 00:16:57.623511 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:16:59 crc kubenswrapper[3561]: I1203 00:16:59.513162 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 03 00:17:01 crc kubenswrapper[3561]: I1203 00:17:01.531321 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 00:17:03 crc kubenswrapper[3561]: I1203 00:17:03.310654 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 00:17:04 crc kubenswrapper[3561]: I1203 00:17:04.204695 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.404511 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.405100 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" containerID="cri-o://9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09" gracePeriod=30
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.419816 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.420064 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" containerID="cri-o://0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8" gracePeriod=30
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.788803 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.847076 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.956908 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.956967 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957020 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957051 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957093 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957123 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957161 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957186 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.957217 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.958073 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" (OuterVolumeSpecName: "client-ca") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.958309 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.958573 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" (OuterVolumeSpecName: "config") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.958581 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.958711 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" (OuterVolumeSpecName: "config") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.962769 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.962779 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.963143 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" (OuterVolumeSpecName: "kube-api-access-pkhl4") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "kube-api-access-pkhl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:17:05 crc kubenswrapper[3561]: I1203 00:17:05.963246 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" (OuterVolumeSpecName: "kube-api-access-v7vkr") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "kube-api-access-v7vkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058471 3561 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058535 3561 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058578 3561 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058592 3561 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058610 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058625 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058640 3561 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058654 3561 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.058667 3561 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") on node \"crc\" DevicePath \"\""
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.098643 3561 generic.go:334] "Generic (PLEG): container finished" podID="1a3e81c3-c292-4130-9436-f94062c91efd" containerID="0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8" exitCode=0
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.098755 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8"}
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.098791 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"1fb8f4bda5b0f9dcd4cb47407fae331100c2697ea1d031206c6e23f6dadf0143"}
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.098810 3561 scope.go:117] "RemoveContainer" containerID="0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8"
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.099646 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.101753 3561 generic.go:334] "Generic (PLEG): container finished" podID="21d29937-debd-4407-b2b1-d1053cb0f342" containerID="9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09" exitCode=0
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.101811 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09"}
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.101848 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"5e6078e04b080b0d75c3506d920566fafd2f6ffa4fb92214adaa8e896518e379"}
Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.101921 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.151840 3561 scope.go:117] "RemoveContainer" containerID="0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.152685 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8\": container with ID starting with 0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8 not found: ID does not exist" containerID="0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.152785 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8"} err="failed to get container status \"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8\": rpc error: code = NotFound desc = could not find container \"0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8\": container with ID starting with 0a2a7c0317d4ab34cc836f681bceabbfcfdc0bfd8adbbace822d794922eb11a8 not found: ID does not exist" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.152828 3561 scope.go:117] "RemoveContainer" containerID="9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.165532 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.169908 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.179060 3561 
kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.182622 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.192285 3561 scope.go:117] "RemoveContainer" containerID="9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.192912 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09\": container with ID starting with 9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09 not found: ID does not exist" containerID="9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.192973 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09"} err="failed to get container status \"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09\": rpc error: code = NotFound desc = could not find container \"9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09\": container with ID starting with 9c3fe397c3f6654e9143c043f198cb80563bc16f072d9cae72569e31c3664a09 not found: ID does not exist" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.643471 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.643722 3561 topology_manager.go:215] "Topology Admit Handler" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" podNamespace="openshift-controller-manager" 
podName="controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.644114 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" containerName="installer" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644164 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" containerName="installer" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.644207 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644230 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.644263 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644282 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.644311 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644330 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 03 00:17:06 crc kubenswrapper[3561]: E1203 00:17:06.644356 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644375 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" 
containerName="route-controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644722 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644760 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644802 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0f975b-347e-4c02-8f84-22a14ac75a3c" containerName="installer" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644824 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.644846 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.645731 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.650420 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.650715 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.651398 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.652722 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.654486 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.654746 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.654965 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.656042 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.661130 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.661526 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.661602 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.661648 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.661751 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.662993 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.665218 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.677641 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"] Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.683933 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.685319 3561 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.769324 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.769421 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770443 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgc5\" (UniqueName: \"kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770514 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770589 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770645 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770693 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjnc\" (UniqueName: \"kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770746 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.770794 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " 
pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872442 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872568 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872637 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872710 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rjgc5\" (UniqueName: \"kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872763 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert\") pod 
\"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872870 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.872972 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.873221 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjnc\" (UniqueName: \"kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.873282 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.875653 3561 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.876073 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.877221 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.877923 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.880724 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 
00:17:06.889952 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.894111 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.906388 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjnc\" (UniqueName: \"kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc\") pod \"route-controller-manager-849697f58-scdmx\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") " pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.908858 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjgc5\" (UniqueName: \"kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5\") pod \"controller-manager-77cfc55b7-q9tts\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:06 crc kubenswrapper[3561]: I1203 00:17:06.988688 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.020320 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.188981 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.288888 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"] Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.457952 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.673684 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" path="/var/lib/kubelet/pods/1a3e81c3-c292-4130-9436-f94062c91efd/volumes" Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.676655 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" path="/var/lib/kubelet/pods/21d29937-debd-4407-b2b1-d1053cb0f342/volumes" Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776407 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776748 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" containerID="cri-o://8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776836 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" 
containerID="cri-o://2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776880 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" containerID="cri-o://f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776951 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" containerID="cri-o://0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.777002 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" containerID="cri-o://b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.776966 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" containerID="cri-o://24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 00:17:07.777186 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" gracePeriod=30 Dec 03 00:17:07 crc kubenswrapper[3561]: I1203 
00:17:07.853218 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" containerID="cri-o://32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" gracePeriod=30
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.128746 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" event={"ID":"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca","Type":"ContainerStarted","Data":"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.128781 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" event={"ID":"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca","Type":"ContainerStarted","Data":"de4aebe274038d7b37ad964ff356a132b8f609695e75f636e098f678631e8ccb"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.130131 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.133882 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.133925 3561 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="83e5851fa9757464d7d57e36e5eb573f39fcbee9a3bd0805c37da4e2998af6a2" exitCode=2
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.133976 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"83e5851fa9757464d7d57e36e5eb573f39fcbee9a3bd0805c37da4e2998af6a2"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.134001 3561 scope.go:117] "RemoveContainer" containerID="25e251c91f998883cec92448e57ffcbd0f46f7190f3879fe24b99ae2240a1795"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.134482 3561 scope.go:117] "RemoveContainer" containerID="83e5851fa9757464d7d57e36e5eb573f39fcbee9a3bd0805c37da4e2998af6a2"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.134914 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.137774 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" event={"ID":"b43b4774-5a37-43cf-8696-3e2baca14524","Type":"ContainerStarted","Data":"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.137808 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" event={"ID":"b43b4774-5a37-43cf-8696-3e2baca14524","Type":"ContainerStarted","Data":"54f92f70bb9df43e6d2859cf3536ec8eb8567941ec8e0c02bbb731efd6856dc9"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.139722 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.140410 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.140733 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.141423 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.142153 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.142875 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.142965 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143032 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143113 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143127 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143139 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" exitCode=0
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143149 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" exitCode=143
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143160 3561 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" exitCode=143
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.142934 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143194 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143207 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143216 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143226 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143236 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143250 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143257 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143264 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143270 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143277 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143285 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143291 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143299 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143305 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143311 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143318 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143324 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143329 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143335 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143343 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143351 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143358 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143364 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143369 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143375 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143381 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143386 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143392 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.143398 3561 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"}
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.154465 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" podStartSLOduration=3.154425285 podStartE2EDuration="3.154425285s" podCreationTimestamp="2025-12-03 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:17:08.152271557 +0000 UTC m=+626.932705825" watchObservedRunningTime="2025-12-03 00:17:08.154425285 +0000 UTC m=+626.934859543"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.167503 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.196300 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.207891 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.225855 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mcgp6"]
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.226254 3561 topology_manager.go:215] "Topology Admit Handler" podUID="be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.226566 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.226681 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.226776 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.227008 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.227103 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.227178 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.227248 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.227308 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.227373 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.227453 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.227560 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.227717 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.228335 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.228399 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.228486 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.228566 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.228657 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.228725 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229006 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229110 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229645 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229732 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229814 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229901 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.229999 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.230083 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.232708 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.234198 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" podStartSLOduration=3.234152652 podStartE2EDuration="3.234152652s" podCreationTimestamp="2025-12-03 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:17:08.230912241 +0000 UTC m=+627.011346509" watchObservedRunningTime="2025-12-03 00:17:08.234152652 +0000 UTC m=+627.014586930"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.236353 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-jpwlq"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.239952 3561 scope.go:117] "RemoveContainer" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.270106 3561 scope.go:117] "RemoveContainer" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293303 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293355 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293395 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293415 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293437 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293453 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293473 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293492 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293510 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293534 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293579 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293598 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293616 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293641 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293668 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293687 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293708 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293731 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.293750 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294342 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294413 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294439 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294464 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294487 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294512 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294534 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294577 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.294600 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket" (OuterVolumeSpecName: "log-socket") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.295685 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.295944 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash" (OuterVolumeSpecName: "host-slash") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.295968 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.295983 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.296328 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.296591 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.297266 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.296680 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log" (OuterVolumeSpecName: "node-log") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.298778 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.311187 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" (OuterVolumeSpecName: "kube-api-access-f9495") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "kube-api-access-f9495". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.315562 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.360203 3561 scope.go:117] "RemoveContainer" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.394996 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-env-overrides\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395058 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-node-log\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395125 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-netns\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395641 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-etc-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395777 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovn-node-metrics-cert\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395835 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-var-lib-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395863 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-bin\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395932 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-kubelet\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.395980 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2p4r\" (UniqueName: \"kubernetes.io/projected/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-kube-api-access-s2p4r\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396006 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-config\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396028 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-slash\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396065 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-netd\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396099 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396117 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-log-socket\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396137 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-systemd-units\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396167 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-script-lib\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396194 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-ovn\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396213 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396246 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 
crc kubenswrapper[3561]: I1203 00:17:08.396290 3561 reconciler_common.go:300] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396303 3561 reconciler_common.go:300] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396314 3561 reconciler_common.go:300] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396324 3561 reconciler_common.go:300] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396334 3561 reconciler_common.go:300] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396343 3561 reconciler_common.go:300] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396353 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396363 3561 reconciler_common.go:300] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396374 3561 reconciler_common.go:300] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396385 3561 reconciler_common.go:300] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396396 3561 reconciler_common.go:300] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396407 3561 reconciler_common.go:300] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396417 3561 reconciler_common.go:300] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396426 3561 reconciler_common.go:300] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396435 3561 reconciler_common.go:300] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396444 3561 reconciler_common.go:300] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396454 3561 reconciler_common.go:300] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396464 3561 reconciler_common.go:300] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396473 3561 reconciler_common.go:300] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.396843 3561 scope.go:117] "RemoveContainer" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.420113 3561 scope.go:117] "RemoveContainer" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.448138 3561 scope.go:117] "RemoveContainer" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.486140 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.486665 3561 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.486709 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} err="failed to get container status \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.486718 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.487103 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487138 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} err="failed to get container status \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": rpc error: code = NotFound 
desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487149 3561 scope.go:117] "RemoveContainer" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.487420 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487445 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} err="failed to get container status \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487453 3561 scope.go:117] "RemoveContainer" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.487924 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container with ID starting with 
b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487977 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} err="failed to get container status \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container with ID starting with b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.487992 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.488310 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID does not exist" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.488337 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} err="failed to get container status \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": rpc error: code = NotFound desc = could not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID 
does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.488346 3561 scope.go:117] "RemoveContainer" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.488714 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": container with ID starting with f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a not found: ID does not exist" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.488744 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"} err="failed to get container status \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": rpc error: code = NotFound desc = could not find container \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": container with ID starting with f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.488755 3561 scope.go:117] "RemoveContainer" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.489047 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": container with ID starting with 0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a not found: ID does not exist" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489089 3561 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"} err="failed to get container status \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": rpc error: code = NotFound desc = could not find container \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": container with ID starting with 0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489100 3561 scope.go:117] "RemoveContainer" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 00:17:08.489381 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": container with ID starting with 8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605 not found: ID does not exist" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489421 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"} err="failed to get container status \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": rpc error: code = NotFound desc = could not find container \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": container with ID starting with 8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489435 3561 scope.go:117] "RemoveContainer" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: E1203 
00:17:08.489817 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": container with ID starting with 7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d not found: ID does not exist" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489844 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} err="failed to get container status \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": rpc error: code = NotFound desc = could not find container \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": container with ID starting with 7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.489854 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.490517 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} err="failed to get container status \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.490583 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc 
kubenswrapper[3561]: I1203 00:17:08.490824 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} err="failed to get container status \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": rpc error: code = NotFound desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.490846 3561 scope.go:117] "RemoveContainer" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.491062 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} err="failed to get container status \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.491081 3561 scope.go:117] "RemoveContainer" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.491329 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} err="failed to get container status \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container 
with ID starting with b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.491348 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.492220 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} err="failed to get container status \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": rpc error: code = NotFound desc = could not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.492236 3561 scope.go:117] "RemoveContainer" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494133 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"} err="failed to get container status \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": rpc error: code = NotFound desc = could not find container \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": container with ID starting with f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494152 3561 scope.go:117] "RemoveContainer" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494448 3561 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"} err="failed to get container status \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": rpc error: code = NotFound desc = could not find container \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": container with ID starting with 0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494467 3561 scope.go:117] "RemoveContainer" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494751 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"} err="failed to get container status \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": rpc error: code = NotFound desc = could not find container \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": container with ID starting with 8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.494766 3561 scope.go:117] "RemoveContainer" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.495023 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} err="failed to get container status \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": rpc error: code = NotFound desc = could not find container \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": container with ID starting with 7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d not found: ID does not 
exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.495041 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.495833 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} err="failed to get container status \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.495852 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496729 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-var-lib-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496766 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-bin\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496798 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-kubelet\") pod 
\"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496844 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-s2p4r\" (UniqueName: \"kubernetes.io/projected/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-kube-api-access-s2p4r\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496869 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-config\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496894 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-slash\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496925 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-netd\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496951 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.496974 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-log-socket\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497000 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-systemd-units\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497024 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-ovn\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497046 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-script-lib\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497104 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497136 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497173 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-env-overrides\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497197 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-node-log\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497222 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-netns\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497252 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-etc-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc 
kubenswrapper[3561]: I1203 00:17:08.497297 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovn-node-metrics-cert\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497486 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-log-socket\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497584 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-var-lib-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497585 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-bin\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497613 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-cni-netd\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497625 3561 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-kubelet\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497649 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497654 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-ovn\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497668 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-slash\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497673 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-systemd-units\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497699 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-node-log\") pod 
\"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497742 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497779 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-host-run-netns\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497825 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-run-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.497970 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-etc-openvswitch\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498168 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} err="failed to get container status 
\"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": rpc error: code = NotFound desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498355 3561 scope.go:117] "RemoveContainer" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498193 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-script-lib\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498396 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-env-overrides\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498840 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} err="failed to get container status \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498883 3561 scope.go:117] "RemoveContainer" 
containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.498943 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovnkube-config\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499255 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} err="failed to get container status \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container with ID starting with b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499292 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499565 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} err="failed to get container status \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": rpc error: code = NotFound desc = could not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499580 3561 scope.go:117] "RemoveContainer" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" 
Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499868 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"} err="failed to get container status \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": rpc error: code = NotFound desc = could not find container \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": container with ID starting with f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.499884 3561 scope.go:117] "RemoveContainer" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500260 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"} err="failed to get container status \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": rpc error: code = NotFound desc = could not find container \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": container with ID starting with 0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500280 3561 scope.go:117] "RemoveContainer" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500523 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-ovn-node-metrics-cert\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500600 3561 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"} err="failed to get container status \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": rpc error: code = NotFound desc = could not find container \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": container with ID starting with 8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500619 3561 scope.go:117] "RemoveContainer" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500956 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} err="failed to get container status \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": rpc error: code = NotFound desc = could not find container \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": container with ID starting with 7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.500975 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.503812 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} err="failed to get container status \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 
32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.503858 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.504252 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} err="failed to get container status \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": rpc error: code = NotFound desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.504284 3561 scope.go:117] "RemoveContainer" containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.504610 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} err="failed to get container status \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.504640 3561 scope.go:117] "RemoveContainer" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.505205 3561 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} err="failed to get container status \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container with ID starting with b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.505228 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.505589 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} err="failed to get container status \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": rpc error: code = NotFound desc = could not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.505616 3561 scope.go:117] "RemoveContainer" containerID="f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506029 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a"} err="failed to get container status \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": rpc error: code = NotFound desc = could not find container \"f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a\": container with ID starting with f734df8ebf989bf0f0e5e01a8c20cc5638d62c5a576afc61fb2400c237e5506a not found: ID does not 
exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506052 3561 scope.go:117] "RemoveContainer" containerID="0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506316 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a"} err="failed to get container status \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": rpc error: code = NotFound desc = could not find container \"0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a\": container with ID starting with 0b71f3daf5e809cb096d29608c23b7cd8c549a6846246e5ea0e93cb0e5e6724a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506342 3561 scope.go:117] "RemoveContainer" containerID="8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506632 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605"} err="failed to get container status \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": rpc error: code = NotFound desc = could not find container \"8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605\": container with ID starting with 8809a30cf6955cf7a4541f713af8bafd8543ca55d9142eb9165803a77b789605 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506656 3561 scope.go:117] "RemoveContainer" containerID="7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506927 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d"} err="failed to get container status 
\"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": rpc error: code = NotFound desc = could not find container \"7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d\": container with ID starting with 7a8c096353bc1b2646690f348cc0d5222d1d8c5512cddeb66602aca5f11ac51d not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.506951 3561 scope.go:117] "RemoveContainer" containerID="32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507349 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a"} err="failed to get container status \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": rpc error: code = NotFound desc = could not find container \"32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a\": container with ID starting with 32e181c5f7cffc525fdc721053dc38476f74864fc80b3354033381bdd536a26a not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507369 3561 scope.go:117] "RemoveContainer" containerID="24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507688 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa"} err="failed to get container status \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": rpc error: code = NotFound desc = could not find container \"24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa\": container with ID starting with 24be3e7cfbd0c68547326e04492704bee2386d43dd2975c119ea0477da7d51aa not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507712 3561 scope.go:117] "RemoveContainer" 
containerID="2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507952 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025"} err="failed to get container status \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": rpc error: code = NotFound desc = could not find container \"2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025\": container with ID starting with 2e540307d6e7f931c94b487f8a1a9f4135ad019e8afbd0d2677db958dd64b025 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.507974 3561 scope.go:117] "RemoveContainer" containerID="b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.508230 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026"} err="failed to get container status \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": rpc error: code = NotFound desc = could not find container \"b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026\": container with ID starting with b1b32d14acb65dbb557257d26c05a2c49a2ef0bc80ef349bc1f1d6c7a9910026 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.508252 3561 scope.go:117] "RemoveContainer" containerID="425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.508605 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8"} err="failed to get container status \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": rpc error: code = NotFound desc = could 
not find container \"425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8\": container with ID starting with 425aec8c17373b82a719da8cd00990db02377be093bcdc71509788e0ef17e0b8 not found: ID does not exist" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.524355 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2p4r\" (UniqueName: \"kubernetes.io/projected/be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a-kube-api-access-s2p4r\") pod \"ovnkube-node-mcgp6\" (UID: \"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:08 crc kubenswrapper[3561]: I1203 00:17:08.547290 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.067269 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.148483 3561 generic.go:334] "Generic (PLEG): container finished" podID="be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a" containerID="87da5e4bf03041cb05bfc640d9cc5a63043b24d2db766482b9a8c3c857e5de6a" exitCode=0 Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.148567 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerDied","Data":"87da5e4bf03041cb05bfc640d9cc5a63043b24d2db766482b9a8c3c857e5de6a"} Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.148643 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"60d6a84350e2da3803c7e5a7ab8f296d36de3fa569362eabb8c592593e02dfc2"} Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.150192 3561 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.151417 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.151440 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"971bf6fca4ccfe7795ea978c89b627f475353cdad241f283e929bd4958a5aaf7"} Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.151807 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.152043 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.169681 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.171477 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.260423 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.263697 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.672417 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" 
path="/var/lib/kubelet/pods/3e19f9e8-9a37-4ca8-9790-c219750ab482/volumes" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.822677 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 00:17:09 crc kubenswrapper[3561]: I1203 00:17:09.998133 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:17:10 crc kubenswrapper[3561]: I1203 00:17:10.159683 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"0e7732a8d66695338d7755e79a12915d85444a1d534d11e49693e9f4839f9be0"} Dec 03 00:17:10 crc kubenswrapper[3561]: I1203 00:17:10.159734 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"255af5d06700f2cc5075728d04d0bf8c889ccf1e5686046d392dcfb4f4d83582"} Dec 03 00:17:10 crc kubenswrapper[3561]: I1203 00:17:10.159748 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"c281e0fc600a9eb44ceeb0abd0e4873234afbfb5bbe141fca6174aa9170b4dbe"} Dec 03 00:17:10 crc kubenswrapper[3561]: I1203 00:17:10.159763 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"238b847b952d73ee71b109077c03256bbe0106abc049622ca7614153aa17cafb"} Dec 03 00:17:10 crc kubenswrapper[3561]: I1203 00:17:10.159777 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" 
event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"3497bac26a4c556c4eb0d10036b72439e48e0c0bfd274cbbea8ebaf95b42b53b"} Dec 03 00:17:11 crc kubenswrapper[3561]: I1203 00:17:11.169520 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"eee22df3d63809966989a5f707930b88f82ab3622e184052415d900ba0afc040"} Dec 03 00:17:13 crc kubenswrapper[3561]: I1203 00:17:13.192715 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"a3998f486fca9efb0bbb0992765298d3990630261c28e8121cbd814aecc3e200"} Dec 03 00:17:13 crc kubenswrapper[3561]: I1203 00:17:13.939346 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.206323 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" event={"ID":"be4b6e6b-c2a2-4deb-84ae-6bd9d72ce79a","Type":"ContainerStarted","Data":"44993b0c2e50d70e7c8fa0dd3bc8dd6688c234afc4977aa75bb1903cf204e1b0"} Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.206656 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.206924 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.207111 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.239144 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" podStartSLOduration=7.239095718 podStartE2EDuration="7.239095718s" podCreationTimestamp="2025-12-03 00:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:17:15.233925447 +0000 UTC m=+634.014359725" watchObservedRunningTime="2025-12-03 00:17:15.239095718 +0000 UTC m=+634.019529996" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.259293 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.259356 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6" Dec 03 00:17:15 crc kubenswrapper[3561]: I1203 00:17:15.329917 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:17:18 crc kubenswrapper[3561]: I1203 00:17:18.242303 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:17:20 crc kubenswrapper[3561]: I1203 00:17:20.653173 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:17:20 crc kubenswrapper[3561]: I1203 00:17:20.898648 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Dec 03 00:17:21 crc kubenswrapper[3561]: I1203 00:17:21.494158 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 00:17:22 crc kubenswrapper[3561]: I1203 00:17:22.664111 3561 scope.go:117] "RemoveContainer" containerID="83e5851fa9757464d7d57e36e5eb573f39fcbee9a3bd0805c37da4e2998af6a2" Dec 03 00:17:23 crc kubenswrapper[3561]: I1203 00:17:23.255200 3561 logs.go:325] 
"Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Dec 03 00:17:23 crc kubenswrapper[3561]: I1203 00:17:23.255577 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"7f7441689cf28e03d869516364821578efc601b5083a506d2271a44cba8394f9"} Dec 03 00:17:25 crc kubenswrapper[3561]: I1203 00:17:25.402563 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 00:17:25 crc kubenswrapper[3561]: I1203 00:17:25.441140 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:25 crc kubenswrapper[3561]: I1203 00:17:25.441423 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" containerName="controller-manager" containerID="cri-o://7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf" gracePeriod=30 Dec 03 00:17:25 crc kubenswrapper[3561]: I1203 00:17:25.998059 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.008517 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.108970 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca\") pod \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.109295 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert\") pod \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.109335 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config\") pod \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.109398 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles\") pod \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.109451 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjgc5\" (UniqueName: \"kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5\") pod \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\" (UID: \"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca\") " Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.110314 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config" (OuterVolumeSpecName: "config") pod "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" (UID: "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.110367 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" (UID: "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.110670 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" (UID: "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.114753 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5" (OuterVolumeSpecName: "kube-api-access-rjgc5") pod "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" (UID: "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca"). InnerVolumeSpecName "kube-api-access-rjgc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.115074 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" (UID: "3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.211013 3561 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.211073 3561 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.211099 3561 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.211124 3561 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.211147 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rjgc5\" (UniqueName: \"kubernetes.io/projected/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca-kube-api-access-rjgc5\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.272186 3561 generic.go:334] "Generic (PLEG): container finished" podID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" containerID="7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf" exitCode=0 Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.272245 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" event={"ID":"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca","Type":"ContainerDied","Data":"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf"} Dec 03 00:17:26 crc 
kubenswrapper[3561]: I1203 00:17:26.272273 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" event={"ID":"3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca","Type":"ContainerDied","Data":"de4aebe274038d7b37ad964ff356a132b8f609695e75f636e098f678631e8ccb"} Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.272295 3561 scope.go:117] "RemoveContainer" containerID="7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.272392 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77cfc55b7-q9tts" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.313358 3561 scope.go:117] "RemoveContainer" containerID="7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf" Dec 03 00:17:26 crc kubenswrapper[3561]: E1203 00:17:26.318880 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf\": container with ID starting with 7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf not found: ID does not exist" containerID="7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.318954 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf"} err="failed to get container status \"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf\": rpc error: code = NotFound desc = could not find container \"7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf\": container with ID starting with 7738a323c0ee58b00da9a535bbcafcdaccff2de75ca9f5f12385ecde3d2a66bf not found: ID does not exist" Dec 03 00:17:26 crc 
kubenswrapper[3561]: I1203 00:17:26.324151 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.328176 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77cfc55b7-q9tts"] Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.664060 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn"] Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.664212 3561 topology_manager.go:215] "Topology Admit Handler" podUID="4e5a00c5-3be4-4290-81ba-8cd496de3556" podNamespace="openshift-controller-manager" podName="controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: E1203 00:17:26.664575 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" containerName="controller-manager" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.664608 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" containerName="controller-manager" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.664828 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" containerName="controller-manager" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.665460 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.670623 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.670833 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.671027 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.671191 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.672660 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.673246 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.682266 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn"] Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.682335 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.717344 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-client-ca\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " 
pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.717419 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vt2t\" (UniqueName: \"kubernetes.io/projected/4e5a00c5-3be4-4290-81ba-8cd496de3556-kube-api-access-9vt2t\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.717812 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-config\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.717980 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5a00c5-3be4-4290-81ba-8cd496de3556-serving-cert\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.718067 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-proxy-ca-bundles\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.819073 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-config\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.819132 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5a00c5-3be4-4290-81ba-8cd496de3556-serving-cert\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.819160 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-proxy-ca-bundles\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.819203 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-client-ca\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.819238 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9vt2t\" (UniqueName: \"kubernetes.io/projected/4e5a00c5-3be4-4290-81ba-8cd496de3556-kube-api-access-9vt2t\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.820940 3561 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-client-ca\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.821186 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-config\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.821526 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e5a00c5-3be4-4290-81ba-8cd496de3556-proxy-ca-bundles\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.823211 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5a00c5-3be4-4290-81ba-8cd496de3556-serving-cert\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:26 crc kubenswrapper[3561]: I1203 00:17:26.856987 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vt2t\" (UniqueName: \"kubernetes.io/projected/4e5a00c5-3be4-4290-81ba-8cd496de3556-kube-api-access-9vt2t\") pod \"controller-manager-57bbb7f5cf-lltsn\" (UID: \"4e5a00c5-3be4-4290-81ba-8cd496de3556\") " pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 
00:17:27 crc kubenswrapper[3561]: I1203 00:17:26.998939 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.224880 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn"] Dec 03 00:17:27 crc kubenswrapper[3561]: W1203 00:17:27.235816 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e5a00c5_3be4_4290_81ba_8cd496de3556.slice/crio-62d838e03fede845ee3525d30e048ecd3fefccf0100c19bcde1fda7e6bd2f4e5 WatchSource:0}: Error finding container 62d838e03fede845ee3525d30e048ecd3fefccf0100c19bcde1fda7e6bd2f4e5: Status 404 returned error can't find the container with id 62d838e03fede845ee3525d30e048ecd3fefccf0100c19bcde1fda7e6bd2f4e5 Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.279932 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" event={"ID":"4e5a00c5-3be4-4290-81ba-8cd496de3556","Type":"ContainerStarted","Data":"62d838e03fede845ee3525d30e048ecd3fefccf0100c19bcde1fda7e6bd2f4e5"} Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.623196 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.623631 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused"
Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.623678 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.624454 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.624633 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481" gracePeriod=600
Dec 03 00:17:27 crc kubenswrapper[3561]: I1203 00:17:27.677612 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca" path="/var/lib/kubelet/pods/3ea6f4e1-ee53-4dfc-89cc-b02ee0573dca/volumes"
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.285775 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" event={"ID":"4e5a00c5-3be4-4290-81ba-8cd496de3556","Type":"ContainerStarted","Data":"d2ce6d4ee6a9a23dd104663006d98745579c45b2fbef1b9b1a35772748a44107"}
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.288820 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481" exitCode=0
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.288864 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481"}
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.288885 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892"}
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.288905 3561 scope.go:117] "RemoveContainer" containerID="15e97c832b1edd5118dd5b70cf73c62c293a622f94794b4b5fd4db37a2862e9f"
Dec 03 00:17:28 crc kubenswrapper[3561]: I1203 00:17:28.310023 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn" podStartSLOduration=3.309957734 podStartE2EDuration="3.309957734s" podCreationTimestamp="2025-12-03 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:17:28.306553518 +0000 UTC m=+647.086987796" watchObservedRunningTime="2025-12-03 00:17:28.309957734 +0000 UTC m=+647.090391992"
Dec 03 00:17:29 crc kubenswrapper[3561]: I1203 00:17:29.299701 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn"
Dec 03 00:17:29 crc kubenswrapper[3561]: I1203 00:17:29.304246 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57bbb7f5cf-lltsn"
Dec 03 00:17:38 crc kubenswrapper[3561]: I1203 00:17:38.611638 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcgp6"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.514289 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.514691 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.514731 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.514750 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.514792 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.987367 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.987421 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.990523 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.990611 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.991061 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.991094 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.991477 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not found: ID does not exist" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.991512 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.991973 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.992003 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9" err="rpc error: code = NotFound desc = could not find container \"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.992338 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.992403 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.992811 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.992839 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.993191 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.993222 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.993592 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.993619 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.993953 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.993983 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.994388 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.994418 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6" err="rpc error: code = NotFound desc = could not find container \"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist"
Dec 03 00:17:41 crc kubenswrapper[3561]: E1203 00:17:41.995280 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba"
Dec 03 00:17:41 crc kubenswrapper[3561]: I1203 00:17:41.995307 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist"
Dec 03 00:18:05 crc kubenswrapper[3561]: I1203 00:18:05.391698 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"]
Dec 03 00:18:05 crc kubenswrapper[3561]: I1203 00:18:05.393765 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" containerName="route-controller-manager" containerID="cri-o://f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b" gracePeriod=30
Dec 03 00:18:05 crc kubenswrapper[3561]: I1203 00:18:05.903192 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.125997 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca\") pod \"b43b4774-5a37-43cf-8696-3e2baca14524\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") "
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.126060 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert\") pod \"b43b4774-5a37-43cf-8696-3e2baca14524\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") "
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.126088 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config\") pod \"b43b4774-5a37-43cf-8696-3e2baca14524\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") "
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.126518 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjnc\" (UniqueName: \"kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc\") pod \"b43b4774-5a37-43cf-8696-3e2baca14524\" (UID: \"b43b4774-5a37-43cf-8696-3e2baca14524\") "
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.127127 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca" (OuterVolumeSpecName: "client-ca") pod "b43b4774-5a37-43cf-8696-3e2baca14524" (UID: "b43b4774-5a37-43cf-8696-3e2baca14524"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.127486 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config" (OuterVolumeSpecName: "config") pod "b43b4774-5a37-43cf-8696-3e2baca14524" (UID: "b43b4774-5a37-43cf-8696-3e2baca14524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.132871 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b43b4774-5a37-43cf-8696-3e2baca14524" (UID: "b43b4774-5a37-43cf-8696-3e2baca14524"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.132957 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc" (OuterVolumeSpecName: "kube-api-access-bhjnc") pod "b43b4774-5a37-43cf-8696-3e2baca14524" (UID: "b43b4774-5a37-43cf-8696-3e2baca14524"). InnerVolumeSpecName "kube-api-access-bhjnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.227605 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bhjnc\" (UniqueName: \"kubernetes.io/projected/b43b4774-5a37-43cf-8696-3e2baca14524-kube-api-access-bhjnc\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.227648 3561 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-client-ca\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.227659 3561 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43b4774-5a37-43cf-8696-3e2baca14524-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.227669 3561 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43b4774-5a37-43cf-8696-3e2baca14524-config\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.494911 3561 generic.go:334] "Generic (PLEG): container finished" podID="b43b4774-5a37-43cf-8696-3e2baca14524" containerID="f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b" exitCode=0
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.494968 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" event={"ID":"b43b4774-5a37-43cf-8696-3e2baca14524","Type":"ContainerDied","Data":"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"}
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.494985 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.495004 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849697f58-scdmx" event={"ID":"b43b4774-5a37-43cf-8696-3e2baca14524","Type":"ContainerDied","Data":"54f92f70bb9df43e6d2859cf3536ec8eb8567941ec8e0c02bbb731efd6856dc9"}
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.495054 3561 scope.go:117] "RemoveContainer" containerID="f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.534637 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"]
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.537663 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849697f58-scdmx"]
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.545769 3561 scope.go:117] "RemoveContainer" containerID="f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"
Dec 03 00:18:06 crc kubenswrapper[3561]: E1203 00:18:06.547738 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b\": container with ID starting with f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b not found: ID does not exist" containerID="f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.547799 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b"} err="failed to get container status \"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b\": rpc error: code = NotFound desc = could not find container \"f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b\": container with ID starting with f7e2355afb12189c7ca521d8987f9ca9e2f9192866bc135a889b6928212de35b not found: ID does not exist"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.704741 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"]
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.704851 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9440c0a1-95fd-4bfb-b8f4-b15070d75d79" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:06 crc kubenswrapper[3561]: E1203 00:18:06.705060 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" containerName="route-controller-manager"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.705081 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" containerName="route-controller-manager"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.705211 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" containerName="route-controller-manager"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.705731 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.708308 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.708526 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.709530 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.711782 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.712193 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.712616 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.726283 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"]
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.934519 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-client-ca\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.934851 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdv98\" (UniqueName: \"kubernetes.io/projected/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-kube-api-access-jdv98\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.934880 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-config\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:06 crc kubenswrapper[3561]: I1203 00:18:06.934911 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-serving-cert\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.039578 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-serving-cert\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.039643 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-client-ca\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.039677 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jdv98\" (UniqueName: \"kubernetes.io/projected/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-kube-api-access-jdv98\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.039724 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-config\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.041094 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-client-ca\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.041917 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-config\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.049490 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-serving-cert\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.062187 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdv98\" (UniqueName: \"kubernetes.io/projected/9440c0a1-95fd-4bfb-b8f4-b15070d75d79-kube-api-access-jdv98\") pod \"route-controller-manager-65b848fb98-dzpdc\" (UID: \"9440c0a1-95fd-4bfb-b8f4-b15070d75d79\") " pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.328079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.675161 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43b4774-5a37-43cf-8696-3e2baca14524" path="/var/lib/kubelet/pods/b43b4774-5a37-43cf-8696-3e2baca14524/volumes"
Dec 03 00:18:07 crc kubenswrapper[3561]: I1203 00:18:07.832558 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"]
Dec 03 00:18:08 crc kubenswrapper[3561]: I1203 00:18:08.507961 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc" event={"ID":"9440c0a1-95fd-4bfb-b8f4-b15070d75d79","Type":"ContainerStarted","Data":"181df337d933306e471975926594592309ea27012e45abe6c69cf29507f4ef2e"}
Dec 03 00:18:08 crc kubenswrapper[3561]: I1203 00:18:08.508298 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:08 crc kubenswrapper[3561]: I1203 00:18:08.508318 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc" event={"ID":"9440c0a1-95fd-4bfb-b8f4-b15070d75d79","Type":"ContainerStarted","Data":"48dc48aa00d051f3e666c21940992acc17ba94846d70521ccbfde79969660427"}
Dec 03 00:18:08 crc kubenswrapper[3561]: I1203 00:18:08.514490 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc"
Dec 03 00:18:08 crc kubenswrapper[3561]: I1203 00:18:08.548236 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65b848fb98-dzpdc" podStartSLOduration=3.548159618 podStartE2EDuration="3.548159618s" podCreationTimestamp="2025-12-03 00:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:18:08.529931539 +0000 UTC m=+687.310365807" watchObservedRunningTime="2025-12-03 00:18:08.548159618 +0000 UTC m=+687.328593886"
Dec 03 00:18:23 crc kubenswrapper[3561]: I1203 00:18:23.594279 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"]
Dec 03 00:18:23 crc kubenswrapper[3561]: I1203 00:18:23.595011 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6mlm" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="registry-server" containerID="cri-o://a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9" gracePeriod=30
Dec 03 00:18:23 crc kubenswrapper[3561]: I1203 00:18:23.972293 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6mlm"
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.165823 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content\") pod \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") "
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.165906 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshmw\" (UniqueName: \"kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw\") pod \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") "
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.165946 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities\") pod \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\" (UID: \"3b9c24fe-561f-4c69-b91e-ae8796e4d78f\") "
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.167630 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities" (OuterVolumeSpecName: "utilities") pod "3b9c24fe-561f-4c69-b91e-ae8796e4d78f" (UID: "3b9c24fe-561f-4c69-b91e-ae8796e4d78f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.180441 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw" (OuterVolumeSpecName: "kube-api-access-rshmw") pod "3b9c24fe-561f-4c69-b91e-ae8796e4d78f" (UID: "3b9c24fe-561f-4c69-b91e-ae8796e4d78f"). InnerVolumeSpecName "kube-api-access-rshmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.267005 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rshmw\" (UniqueName: \"kubernetes.io/projected/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-kube-api-access-rshmw\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.267058 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.351343 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b9c24fe-561f-4c69-b91e-ae8796e4d78f" (UID: "3b9c24fe-561f-4c69-b91e-ae8796e4d78f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.368758 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b9c24fe-561f-4c69-b91e-ae8796e4d78f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.598043 3561 generic.go:334] "Generic (PLEG): container finished" podID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerID="a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9" exitCode=0
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.598085 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerDied","Data":"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9"}
Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.598094 3561 util.go:48] "No ready sandbox for pod can be
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6mlm" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.598116 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6mlm" event={"ID":"3b9c24fe-561f-4c69-b91e-ae8796e4d78f","Type":"ContainerDied","Data":"d3c007dd9147923a7ab95b2c282d46cbad8b4a0b6e843766f5db99954f3b0086"} Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.598137 3561 scope.go:117] "RemoveContainer" containerID="a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.632022 3561 scope.go:117] "RemoveContainer" containerID="7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.632689 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"] Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.638445 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6mlm"] Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.673741 3561 scope.go:117] "RemoveContainer" containerID="70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.704016 3561 scope.go:117] "RemoveContainer" containerID="a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9" Dec 03 00:18:24 crc kubenswrapper[3561]: E1203 00:18:24.704579 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9\": container with ID starting with a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9 not found: ID does not exist" containerID="a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 
00:18:24.704724 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9"} err="failed to get container status \"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9\": rpc error: code = NotFound desc = could not find container \"a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9\": container with ID starting with a3ab76beeb1ca5cb44e7de532798cf17f01cfd0f2faef2c63569c0e3f68155b9 not found: ID does not exist" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.704808 3561 scope.go:117] "RemoveContainer" containerID="7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a" Dec 03 00:18:24 crc kubenswrapper[3561]: E1203 00:18:24.705237 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a\": container with ID starting with 7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a not found: ID does not exist" containerID="7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.705286 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a"} err="failed to get container status \"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a\": rpc error: code = NotFound desc = could not find container \"7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a\": container with ID starting with 7712a159e821c900c783c774691cf65a1e4bbed7cac5eb4227d626d9cb4d6b4a not found: ID does not exist" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.705301 3561 scope.go:117] "RemoveContainer" containerID="70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96" Dec 03 00:18:24 crc 
kubenswrapper[3561]: E1203 00:18:24.705614 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96\": container with ID starting with 70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96 not found: ID does not exist" containerID="70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96" Dec 03 00:18:24 crc kubenswrapper[3561]: I1203 00:18:24.705728 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96"} err="failed to get container status \"70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96\": rpc error: code = NotFound desc = could not find container \"70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96\": container with ID starting with 70df6f2f7c0af65545adb8ec113d0aba4ac511a80288260f56efd131628a8f96 not found: ID does not exist" Dec 03 00:18:25 crc kubenswrapper[3561]: I1203 00:18:25.672808 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" path="/var/lib/kubelet/pods/3b9c24fe-561f-4c69-b91e-ae8796e4d78f/volumes" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.515610 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.516208 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.516241 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.516257 3561 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.516276 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:18:41 crc kubenswrapper[3561]: I1203 00:18:41.998440 3561 scope.go:117] "RemoveContainer" containerID="50ec08c89e87b466ab5757bf580922cec8295d7eac3615fb0bcdb6f20c844ba9" Dec 03 00:19:27 crc kubenswrapper[3561]: I1203 00:19:27.622685 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:19:27 crc kubenswrapper[3561]: I1203 00:19:27.623521 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:19:41 crc kubenswrapper[3561]: I1203 00:19:41.517315 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:19:41 crc kubenswrapper[3561]: I1203 00:19:41.518018 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:19:41 crc kubenswrapper[3561]: I1203 00:19:41.518053 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:19:41 crc kubenswrapper[3561]: I1203 00:19:41.518072 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:19:41 crc kubenswrapper[3561]: I1203 00:19:41.518094 3561 
kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.278325 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.279042 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" podNamespace="openshift-marketplace" podName="redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: E1203 00:19:54.279236 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="extract-utilities" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.279255 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="extract-utilities" Dec 03 00:19:54 crc kubenswrapper[3561]: E1203 00:19:54.279272 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="registry-server" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.279282 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="registry-server" Dec 03 00:19:54 crc kubenswrapper[3561]: E1203 00:19:54.279294 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="extract-content" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.279302 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="extract-content" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.279437 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9c24fe-561f-4c69-b91e-ae8796e4d78f" containerName="registry-server" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.280316 3561 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.291483 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.291623 3561 topology_manager.go:215] "Topology Admit Handler" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" podNamespace="openshift-marketplace" podName="certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.293003 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.295895 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.310868 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.379383 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.379516 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8141457f-4211-4f39-a116-f6d971976b48" podNamespace="openshift-marketplace" podName="8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.380694 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.384139 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.384235 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9730140c-48cc-4687-ba52-9049cf40283e" podNamespace="openshift-marketplace" podName="6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.385332 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.389259 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4w6pc" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.395118 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.402054 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x"] Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471072 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471147 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471171 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471190 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471225 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmw4r\" (UniqueName: \"kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471252 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities\") pod \"certified-operators-h7wvh\" (UID: 
\"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471277 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w628\" (UniqueName: \"kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471303 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrxb\" (UniqueName: \"kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471330 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471354 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkt42\" (UniqueName: \"kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 
00:19:54.471373 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.471393 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572552 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tmw4r\" (UniqueName: \"kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572625 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572663 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w628\" (UniqueName: \"kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " 
pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572712 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrxb\" (UniqueName: \"kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572754 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572785 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jkt42\" (UniqueName: \"kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572811 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572842 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572873 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572916 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572944 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.572975 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.573594 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.574156 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.575172 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.575834 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.576052 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.576071 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.576145 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.576156 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.595221 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmrxb\" (UniqueName: \"kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.595757 3561 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jkt42\" (UniqueName: \"kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.595913 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmw4r\" (UniqueName: \"kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r\") pod \"redhat-operators-f7fkh\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") " pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.596619 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w628\" (UniqueName: \"kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628\") pod \"certified-operators-h7wvh\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") " pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.604374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.614915 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7wvh" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.731809 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.756005 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"
Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.926845 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"]
Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.931979 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"]
Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.932105 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" podNamespace="openshift-marketplace" podName="6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.933041 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:54 crc kubenswrapper[3561]: I1203 00:19:54.944771 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"]
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.010462 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"]
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.082218 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.082278 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clgk\" (UniqueName: \"kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.082323 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.098735 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerStarted","Data":"20aa2094ffb299883bee3a9dee4fa37465beebb3793819146127d46c445fafcb"}
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.099509 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerStarted","Data":"d1d8d5658db6760af4ecf80514c0a616b614552b84db9a4e61b152906b415c1a"}
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.135137 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"]
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.183886 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.183931 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5clgk\" (UniqueName: \"kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.183968 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.184345 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.184554 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.199505 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x"]
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.214650 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clgk\" (UniqueName: \"kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.257776 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:19:55 crc kubenswrapper[3561]: I1203 00:19:55.568263 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"]
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.104977 3561 generic.go:334] "Generic (PLEG): container finished" podID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerID="2e5c9f49e6895fd6932f13fa4ad8975cd4678dd5ef1569ddc7a957c6de14f097" exitCode=0
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.105231 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerDied","Data":"2e5c9f49e6895fd6932f13fa4ad8975cd4678dd5ef1569ddc7a957c6de14f097"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.106528 3561 generic.go:334] "Generic (PLEG): container finished" podID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerID="5dc51ab117caf8a8bc99abc8b57c0c36a025f7a30598b930550fe03446617163" exitCode=0
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.106560 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerDied","Data":"5dc51ab117caf8a8bc99abc8b57c0c36a025f7a30598b930550fe03446617163"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.106596 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerStarted","Data":"df73ee9a82adecade0d2c69489b3d21b0583c0e7a39dba001f1f7bb02fa1d597"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.106707 3561 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.107787 3561 generic.go:334] "Generic (PLEG): container finished" podID="9730140c-48cc-4687-ba52-9049cf40283e" containerID="a02783940ecf3f2853683eb1c5c76c844fec5e70224cc223a67b9cd668733ba1" exitCode=0
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.107830 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerDied","Data":"a02783940ecf3f2853683eb1c5c76c844fec5e70224cc223a67b9cd668733ba1"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.107849 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerStarted","Data":"75bdb83b5d17e875434399c21e32ebec61d8c038e2adf3d67c6b2354f7581d17"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.111493 3561 generic.go:334] "Generic (PLEG): container finished" podID="8141457f-4211-4f39-a116-f6d971976b48" containerID="8c8f38ebe0195a21145d0061adcb138a99c0e9db07536cbc82985778b3e6d59a" exitCode=0
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.111574 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerDied","Data":"8c8f38ebe0195a21145d0061adcb138a99c0e9db07536cbc82985778b3e6d59a"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.111595 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerStarted","Data":"cfa37f793890b5cb04e7631e52655ead8bcc1e69f02498f077d143b2f579c0e5"}
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.113623 3561 generic.go:334] "Generic (PLEG): container finished" podID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerID="750f1f19ba18cbc9db3c0256fa35b923d9bf8ee6be44ef6c6d0c602b12fe5dd4" exitCode=0
Dec 03 00:19:56 crc kubenswrapper[3561]: I1203 00:19:56.113641 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerDied","Data":"750f1f19ba18cbc9db3c0256fa35b923d9bf8ee6be44ef6c6d0c602b12fe5dd4"}
Dec 03 00:19:57 crc kubenswrapper[3561]: I1203 00:19:57.126831 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerStarted","Data":"98235811dfac9c80417076cf85e66df472f5a4abf975567a27c1c5448cc90b74"}
Dec 03 00:19:57 crc kubenswrapper[3561]: I1203 00:19:57.130846 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerStarted","Data":"466057ed07a08803610cefcb3908f83fcb16e15226ad63aa595c27acdbc6a82f"}
Dec 03 00:19:57 crc kubenswrapper[3561]: I1203 00:19:57.623986 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:19:57 crc kubenswrapper[3561]: I1203 00:19:57.624383 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:19:58 crc kubenswrapper[3561]: I1203 00:19:58.146838 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerStarted","Data":"9f836eb7e0555db706003a1e1fe0c1a0588a0b3bce30a7cc33e604ca9d66869b"}
Dec 03 00:19:58 crc kubenswrapper[3561]: E1203 00:19:58.486458 3561 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8141457f_4211_4f39_a116_f6d971976b48.slice/crio-conmon-9f836eb7e0555db706003a1e1fe0c1a0588a0b3bce30a7cc33e604ca9d66869b.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 00:19:59 crc kubenswrapper[3561]: I1203 00:19:59.169316 3561 generic.go:334] "Generic (PLEG): container finished" podID="8141457f-4211-4f39-a116-f6d971976b48" containerID="9f836eb7e0555db706003a1e1fe0c1a0588a0b3bce30a7cc33e604ca9d66869b" exitCode=0
Dec 03 00:19:59 crc kubenswrapper[3561]: I1203 00:19:59.169385 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerDied","Data":"9f836eb7e0555db706003a1e1fe0c1a0588a0b3bce30a7cc33e604ca9d66869b"}
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.183221 3561 generic.go:334] "Generic (PLEG): container finished" podID="9730140c-48cc-4687-ba52-9049cf40283e" containerID="83473b96dd519fa2c4ca78d079066bd27299ae1d41c4c2d58ab6efaf8fd4b1ad" exitCode=0
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.183334 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerDied","Data":"83473b96dd519fa2c4ca78d079066bd27299ae1d41c4c2d58ab6efaf8fd4b1ad"}
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.196084 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerStarted","Data":"fc515903cefc98c2f41f3fcb5699ee24d1e064b5f13297bd1a4f64e9737ee916"}
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.200614 3561 generic.go:334] "Generic (PLEG): container finished" podID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerID="98235811dfac9c80417076cf85e66df472f5a4abf975567a27c1c5448cc90b74" exitCode=0
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.200696 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerDied","Data":"98235811dfac9c80417076cf85e66df472f5a4abf975567a27c1c5448cc90b74"}
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.242260 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerStarted","Data":"407244bcbbcf2e9210e48c146e88036661db127bd8303e0bc3168da498d93b07"}
Dec 03 00:20:00 crc kubenswrapper[3561]: I1203 00:20:00.267069 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" podStartSLOduration=5.17648047 podStartE2EDuration="6.266975979s" podCreationTimestamp="2025-12-03 00:19:54 +0000 UTC" firstStartedPulling="2025-12-03 00:19:56.116225138 +0000 UTC m=+794.896659396" lastFinishedPulling="2025-12-03 00:19:57.206720657 +0000 UTC m=+795.987154905" observedRunningTime="2025-12-03 00:20:00.264265474 +0000 UTC m=+799.044699742" watchObservedRunningTime="2025-12-03 00:20:00.266975979 +0000 UTC m=+799.047410317"
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.295816 3561 generic.go:334] "Generic (PLEG): container finished" podID="8141457f-4211-4f39-a116-f6d971976b48" containerID="fc515903cefc98c2f41f3fcb5699ee24d1e064b5f13297bd1a4f64e9737ee916" exitCode=0
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.296123 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerDied","Data":"fc515903cefc98c2f41f3fcb5699ee24d1e064b5f13297bd1a4f64e9737ee916"}
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.298620 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerStarted","Data":"9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337"}
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.302697 3561 generic.go:334] "Generic (PLEG): container finished" podID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerID="407244bcbbcf2e9210e48c146e88036661db127bd8303e0bc3168da498d93b07" exitCode=0
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.302786 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerDied","Data":"407244bcbbcf2e9210e48c146e88036661db127bd8303e0bc3168da498d93b07"}
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.302896 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerStarted","Data":"e1019bd4edc4245c11605ffb62caf925c6c5ac2169ca415794ade4c6bb5fab04"}
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.305036 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerStarted","Data":"c17a9f50350263c027ecec5f7fc1150f8d32f46443923bb56bea745541b4d92a"}
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.385796 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" podStartSLOduration=3.930453171 podStartE2EDuration="7.385729619s" podCreationTimestamp="2025-12-03 00:19:54 +0000 UTC" firstStartedPulling="2025-12-03 00:19:56.107456945 +0000 UTC m=+794.887891203" lastFinishedPulling="2025-12-03 00:19:59.562733393 +0000 UTC m=+798.343167651" observedRunningTime="2025-12-03 00:20:01.382770717 +0000 UTC m=+800.163204985" watchObservedRunningTime="2025-12-03 00:20:01.385729619 +0000 UTC m=+800.166163907"
Dec 03 00:20:01 crc kubenswrapper[3561]: I1203 00:20:01.400770 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" podStartSLOduration=3.986621292 podStartE2EDuration="7.400679215s" podCreationTimestamp="2025-12-03 00:19:54 +0000 UTC" firstStartedPulling="2025-12-03 00:19:56.108935781 +0000 UTC m=+794.889370039" lastFinishedPulling="2025-12-03 00:19:59.522993704 +0000 UTC m=+798.303427962" observedRunningTime="2025-12-03 00:20:01.399895671 +0000 UTC m=+800.180329929" watchObservedRunningTime="2025-12-03 00:20:01.400679215 +0000 UTC m=+800.181113513"
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.332770 3561 generic.go:334] "Generic (PLEG): container finished" podID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerID="e1019bd4edc4245c11605ffb62caf925c6c5ac2169ca415794ade4c6bb5fab04" exitCode=0
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.332989 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerDied","Data":"e1019bd4edc4245c11605ffb62caf925c6c5ac2169ca415794ade4c6bb5fab04"}
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.397007 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h7wvh" podStartSLOduration=3.97596739 podStartE2EDuration="8.396932445s" podCreationTimestamp="2025-12-03 00:19:54 +0000 UTC" firstStartedPulling="2025-12-03 00:19:56.116649842 +0000 UTC m=+794.897084100" lastFinishedPulling="2025-12-03 00:20:00.537614897 +0000 UTC m=+799.318049155" observedRunningTime="2025-12-03 00:20:02.391537206 +0000 UTC m=+801.171971474" watchObservedRunningTime="2025-12-03 00:20:02.396932445 +0000 UTC m=+801.177366713"
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.765339 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x"
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.834289 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkt42\" (UniqueName: \"kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42\") pod \"8141457f-4211-4f39-a116-f6d971976b48\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") "
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.834360 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle\") pod \"8141457f-4211-4f39-a116-f6d971976b48\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") "
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.834422 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util\") pod \"8141457f-4211-4f39-a116-f6d971976b48\" (UID: \"8141457f-4211-4f39-a116-f6d971976b48\") "
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.836108 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle" (OuterVolumeSpecName: "bundle") pod "8141457f-4211-4f39-a116-f6d971976b48" (UID: "8141457f-4211-4f39-a116-f6d971976b48"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.854920 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42" (OuterVolumeSpecName: "kube-api-access-jkt42") pod "8141457f-4211-4f39-a116-f6d971976b48" (UID: "8141457f-4211-4f39-a116-f6d971976b48"). InnerVolumeSpecName "kube-api-access-jkt42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.935516 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jkt42\" (UniqueName: \"kubernetes.io/projected/8141457f-4211-4f39-a116-f6d971976b48-kube-api-access-jkt42\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:02 crc kubenswrapper[3561]: I1203 00:20:02.935569 3561 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.340850 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x"
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.341263 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x" event={"ID":"8141457f-4211-4f39-a116-f6d971976b48","Type":"ContainerDied","Data":"cfa37f793890b5cb04e7631e52655ead8bcc1e69f02498f077d143b2f579c0e5"}
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.341289 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa37f793890b5cb04e7631e52655ead8bcc1e69f02498f077d143b2f579c0e5"
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.755501 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.882214 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clgk\" (UniqueName: \"kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk\") pod \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") "
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.882282 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util\") pod \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") "
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.882395 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle\") pod \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\" (UID: \"b7b8992b-c566-4f5b-830e-b6754d5b0c22\") "
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.883161 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle" (OuterVolumeSpecName: "bundle") pod "b7b8992b-c566-4f5b-830e-b6754d5b0c22" (UID: "b7b8992b-c566-4f5b-830e-b6754d5b0c22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.886346 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk" (OuterVolumeSpecName: "kube-api-access-5clgk") pod "b7b8992b-c566-4f5b-830e-b6754d5b0c22" (UID: "b7b8992b-c566-4f5b-830e-b6754d5b0c22"). InnerVolumeSpecName "kube-api-access-5clgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.984296 3561 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:03 crc kubenswrapper[3561]: I1203 00:20:03.984350 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5clgk\" (UniqueName: \"kubernetes.io/projected/b7b8992b-c566-4f5b-830e-b6754d5b0c22-kube-api-access-5clgk\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.346323 3561 generic.go:334] "Generic (PLEG): container finished" podID="9730140c-48cc-4687-ba52-9049cf40283e" containerID="c17a9f50350263c027ecec5f7fc1150f8d32f46443923bb56bea745541b4d92a" exitCode=0
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.346393 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerDied","Data":"c17a9f50350263c027ecec5f7fc1150f8d32f46443923bb56bea745541b4d92a"}
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.349623 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl" event={"ID":"b7b8992b-c566-4f5b-830e-b6754d5b0c22","Type":"ContainerDied","Data":"df73ee9a82adecade0d2c69489b3d21b0583c0e7a39dba001f1f7bb02fa1d597"}
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.349650 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df73ee9a82adecade0d2c69489b3d21b0583c0e7a39dba001f1f7bb02fa1d597"
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.349666 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl"
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.668949 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.669709 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:04 crc kubenswrapper[3561]: I1203 00:20:04.817584 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.608305 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.752473 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmrxb\" (UniqueName: \"kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb\") pod \"9730140c-48cc-4687-ba52-9049cf40283e\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") "
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.752815 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util\") pod \"9730140c-48cc-4687-ba52-9049cf40283e\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") "
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.763747 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle\") pod \"9730140c-48cc-4687-ba52-9049cf40283e\" (UID: \"9730140c-48cc-4687-ba52-9049cf40283e\") "
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.766304 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb" (OuterVolumeSpecName: "kube-api-access-mmrxb") pod "9730140c-48cc-4687-ba52-9049cf40283e" (UID: "9730140c-48cc-4687-ba52-9049cf40283e"). InnerVolumeSpecName "kube-api-access-mmrxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:20:05 crc kubenswrapper[3561]: I1203 00:20:05.865575 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mmrxb\" (UniqueName: \"kubernetes.io/projected/9730140c-48cc-4687-ba52-9049cf40283e-kube-api-access-mmrxb\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:06 crc kubenswrapper[3561]: I1203 00:20:06.361304 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm" event={"ID":"9730140c-48cc-4687-ba52-9049cf40283e","Type":"ContainerDied","Data":"75bdb83b5d17e875434399c21e32ebec61d8c038e2adf3d67c6b2354f7581d17"}
Dec 03 00:20:06 crc kubenswrapper[3561]: I1203 00:20:06.361493 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75bdb83b5d17e875434399c21e32ebec61d8c038e2adf3d67c6b2354f7581d17"
Dec 03 00:20:06 crc kubenswrapper[3561]: I1203 00:20:06.361725 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm"
Dec 03 00:20:06 crc kubenswrapper[3561]: I1203 00:20:06.535953 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:06 crc kubenswrapper[3561]: I1203 00:20:06.601589 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"]
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.868678 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-p9zdh"]
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.868998 3561 topology_manager.go:215] "Topology Admit Handler" podUID="af32daf5-85c5-4180-b117-f3e012a60f99" podNamespace="service-telemetry" podName="interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869157 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869169 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869180 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869186 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869193 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869200 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869211 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869219 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869232 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869240 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="util"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869251 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869258 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="pull"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869271 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869279 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869290 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869295 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: E1203 00:20:07.869305 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869311 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869423 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="8141457f-4211-4f39-a116-f6d971976b48" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869437 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b8992b-c566-4f5b-830e-b6754d5b0c22" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.869447 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="9730140c-48cc-4687-ba52-9049cf40283e" containerName="extract"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.871407 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.874514 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-fdngn"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.874885 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.875122 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.891653 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-p9zdh"]
Dec 03 00:20:07 crc kubenswrapper[3561]: I1203 00:20:07.964702 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7k6\" (UniqueName: \"kubernetes.io/projected/af32daf5-85c5-4180-b117-f3e012a60f99-kube-api-access-km7k6\") pod \"interconnect-operator-7b75f466d4-p9zdh\" (UID: \"af32daf5-85c5-4180-b117-f3e012a60f99\") " pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.065857 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-km7k6\" (UniqueName: \"kubernetes.io/projected/af32daf5-85c5-4180-b117-f3e012a60f99-kube-api-access-km7k6\") pod \"interconnect-operator-7b75f466d4-p9zdh\" (UID: \"af32daf5-85c5-4180-b117-f3e012a60f99\") " pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.121405 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-km7k6\" (UniqueName: \"kubernetes.io/projected/af32daf5-85c5-4180-b117-f3e012a60f99-kube-api-access-km7k6\") pod \"interconnect-operator-7b75f466d4-p9zdh\" (UID: \"af32daf5-85c5-4180-b117-f3e012a60f99\") " pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.185989 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.387664 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h7wvh" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="registry-server" containerID="cri-o://9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337" gracePeriod=2
Dec 03 00:20:08 crc kubenswrapper[3561]: E1203 00:20:08.620224 3561 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb46ce68_4ab9_40e4_8bb4_12603a4cd384.slice/crio-9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.651072 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-756db8c48b-2gw2f"]
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.651227 3561 topology_manager.go:215] "Topology Admit Handler" podUID="aea5f24e-a2a6-4723-9e00-144471ed49cd" podNamespace="service-telemetry" podName="elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.652087 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.655150 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.655778 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-6bsc8"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.695400 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-756db8c48b-2gw2f"]
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.706838 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-webhook-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.707174 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7lp\" (UniqueName: \"kubernetes.io/projected/aea5f24e-a2a6-4723-9e00-144471ed49cd-kube-api-access-lh7lp\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.707297 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-apiservice-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.809708 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lh7lp\" (UniqueName: \"kubernetes.io/projected/aea5f24e-a2a6-4723-9e00-144471ed49cd-kube-api-access-lh7lp\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.809958 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-apiservice-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.810129 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-webhook-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.819649 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-webhook-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.843095 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh7lp\" (UniqueName: \"kubernetes.io/projected/aea5f24e-a2a6-4723-9e00-144471ed49cd-kube-api-access-lh7lp\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc 
kubenswrapper[3561]: I1203 00:20:08.907499 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aea5f24e-a2a6-4723-9e00-144471ed49cd-apiservice-cert\") pod \"elastic-operator-756db8c48b-2gw2f\" (UID: \"aea5f24e-a2a6-4723-9e00-144471ed49cd\") " pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:08 crc kubenswrapper[3561]: I1203 00:20:08.944419 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-p9zdh"]
Dec 03 00:20:08 crc kubenswrapper[3561]: W1203 00:20:08.995730 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf32daf5_85c5_4180_b117_f3e012a60f99.slice/crio-0f371f4eee0c0354918475ea366b93031f1a48af31f3b848a3f49ea4cddc5f0b WatchSource:0}: Error finding container 0f371f4eee0c0354918475ea366b93031f1a48af31f3b848a3f49ea4cddc5f0b: Status 404 returned error can't find the container with id 0f371f4eee0c0354918475ea366b93031f1a48af31f3b848a3f49ea4cddc5f0b
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.120373 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-756db8c48b-2gw2f"
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.461119 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util" (OuterVolumeSpecName: "util") pod "8141457f-4211-4f39-a116-f6d971976b48" (UID: "8141457f-4211-4f39-a116-f6d971976b48"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.484507 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh" event={"ID":"af32daf5-85c5-4180-b117-f3e012a60f99","Type":"ContainerStarted","Data":"0f371f4eee0c0354918475ea366b93031f1a48af31f3b848a3f49ea4cddc5f0b"}
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.511402 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8141457f-4211-4f39-a116-f6d971976b48-util\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.526532 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle" (OuterVolumeSpecName: "bundle") pod "9730140c-48cc-4687-ba52-9049cf40283e" (UID: "9730140c-48cc-4687-ba52-9049cf40283e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.612442 3561 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:09 crc kubenswrapper[3561]: W1203 00:20:09.675673 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaea5f24e_a2a6_4723_9e00_144471ed49cd.slice/crio-2fb98a21ce32e85357d2c2bdff68d6bd2a16fa88dc5147addb4dc4dc33ed58f6 WatchSource:0}: Error finding container 2fb98a21ce32e85357d2c2bdff68d6bd2a16fa88dc5147addb4dc4dc33ed58f6: Status 404 returned error can't find the container with id 2fb98a21ce32e85357d2c2bdff68d6bd2a16fa88dc5147addb4dc4dc33ed58f6
Dec 03 00:20:09 crc kubenswrapper[3561]: I1203 00:20:09.682340 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-756db8c48b-2gw2f"]
Dec 03 00:20:10 crc kubenswrapper[3561]: I1203 00:20:10.491043 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-756db8c48b-2gw2f" event={"ID":"aea5f24e-a2a6-4723-9e00-144471ed49cd","Type":"ContainerStarted","Data":"2fb98a21ce32e85357d2c2bdff68d6bd2a16fa88dc5147addb4dc4dc33ed58f6"}
Dec 03 00:20:14 crc kubenswrapper[3561]: E1203 00:20:14.617281 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337 is running failed: container process not found" containerID="9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 00:20:14 crc kubenswrapper[3561]: E1203 00:20:14.618517 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337 is running failed: container process not found" containerID="9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 00:20:14 crc kubenswrapper[3561]: E1203 00:20:14.619031 3561 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337 is running failed: container process not found" containerID="9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 00:20:14 crc kubenswrapper[3561]: E1203 00:20:14.619103 3561 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-h7wvh" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="registry-server"
Dec 03 00:20:17 crc kubenswrapper[3561]: I1203 00:20:17.336776 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h7wvh_eb46ce68-4ab9-40e4-8bb4-12603a4cd384/registry-server/0.log"
Dec 03 00:20:17 crc kubenswrapper[3561]: I1203 00:20:17.337705 3561 generic.go:334] "Generic (PLEG): container finished" podID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerID="9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337" exitCode=-1
Dec 03 00:20:17 crc kubenswrapper[3561]: I1203 00:20:17.337738 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerDied","Data":"9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337"}
Dec 03 00:20:19 crc kubenswrapper[3561]: I1203 00:20:19.296452 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util" (OuterVolumeSpecName: "util") pod "9730140c-48cc-4687-ba52-9049cf40283e" (UID: "9730140c-48cc-4687-ba52-9049cf40283e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:19 crc kubenswrapper[3561]: I1203 00:20:19.306470 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9730140c-48cc-4687-ba52-9049cf40283e-util\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:19 crc kubenswrapper[3561]: I1203 00:20:19.333836 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util" (OuterVolumeSpecName: "util") pod "b7b8992b-c566-4f5b-830e-b6754d5b0c22" (UID: "b7b8992b-c566-4f5b-830e-b6754d5b0c22"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:19 crc kubenswrapper[3561]: I1203 00:20:19.407531 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7b8992b-c566-4f5b-830e-b6754d5b0c22-util\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.587455 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.653821 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities\") pod \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") "
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.653918 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content\") pod \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") "
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.654016 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w628\" (UniqueName: \"kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628\") pod \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\" (UID: \"eb46ce68-4ab9-40e4-8bb4-12603a4cd384\") "
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.655086 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities" (OuterVolumeSpecName: "utilities") pod "eb46ce68-4ab9-40e4-8bb4-12603a4cd384" (UID: "eb46ce68-4ab9-40e4-8bb4-12603a4cd384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.677117 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628" (OuterVolumeSpecName: "kube-api-access-4w628") pod "eb46ce68-4ab9-40e4-8bb4-12603a4cd384" (UID: "eb46ce68-4ab9-40e4-8bb4-12603a4cd384"). InnerVolumeSpecName "kube-api-access-4w628". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.759319 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4w628\" (UniqueName: \"kubernetes.io/projected/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-kube-api-access-4w628\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.759363 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.931222 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb46ce68-4ab9-40e4-8bb4-12603a4cd384" (UID: "eb46ce68-4ab9-40e4-8bb4-12603a4cd384"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:21 crc kubenswrapper[3561]: I1203 00:20:21.962599 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb46ce68-4ab9-40e4-8bb4-12603a4cd384-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.395780 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7wvh" event={"ID":"eb46ce68-4ab9-40e4-8bb4-12603a4cd384","Type":"ContainerDied","Data":"20aa2094ffb299883bee3a9dee4fa37465beebb3793819146127d46c445fafcb"}
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.395833 3561 scope.go:117] "RemoveContainer" containerID="9f361f0f451ca99748606194700ed45533922f4da62a107e793183b70dbe5337"
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.395937 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7wvh"
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.413645 3561 generic.go:334] "Generic (PLEG): container finished" podID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerID="466057ed07a08803610cefcb3908f83fcb16e15226ad63aa595c27acdbc6a82f" exitCode=0
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.413704 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerDied","Data":"466057ed07a08803610cefcb3908f83fcb16e15226ad63aa595c27acdbc6a82f"}
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.489643 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"]
Dec 03 00:20:22 crc kubenswrapper[3561]: I1203 00:20:22.497743 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h7wvh"]
Dec 03 00:20:23 crc kubenswrapper[3561]: I1203 00:20:23.376758 3561 scope.go:117] "RemoveContainer" containerID="98235811dfac9c80417076cf85e66df472f5a4abf975567a27c1c5448cc90b74"
Dec 03 00:20:23 crc kubenswrapper[3561]: I1203 00:20:23.676044 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" path="/var/lib/kubelet/pods/eb46ce68-4ab9-40e4-8bb4-12603a4cd384/volumes"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.228469 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.228682 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c267c58e-aca9-4d40-9433-6ac42b961c69" podNamespace="openshift-operators" podName="obo-prometheus-operator-864b67f9b9-vw4dk"
Dec 03 00:20:24 crc kubenswrapper[3561]: E1203 00:20:24.228898 3561 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="extract-utilities"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.228923 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="extract-utilities"
Dec 03 00:20:24 crc kubenswrapper[3561]: E1203 00:20:24.228946 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="extract-content"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.228955 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="extract-content"
Dec 03 00:20:24 crc kubenswrapper[3561]: E1203 00:20:24.228971 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="registry-server"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.228987 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="registry-server"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.229156 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb46ce68-4ab9-40e4-8bb4-12603a4cd384" containerName="registry-server"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.229644 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.232190 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9qj5b"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.232498 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.233002 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.343855 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.352976 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.353172 3561 topology_manager.go:215] "Topology Admit Handler" podUID="04b977ab-167b-4c68-8e41-ab6cae4d68c0" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.353931 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.357130 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.365718 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-j77p7"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.366579 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.399507 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jkjw\" (UniqueName: \"kubernetes.io/projected/c267c58e-aca9-4d40-9433-6ac42b961c69-kube-api-access-5jkjw\") pod \"obo-prometheus-operator-864b67f9b9-vw4dk\" (UID: \"c267c58e-aca9-4d40-9433-6ac42b961c69\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.420012 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.420140 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d8202e2e-7630-4d92-aede-476610ebb07c" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.420964 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.444148 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.448680 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-65df589ff7-p7p4m"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.450591 3561 topology_manager.go:215] "Topology Admit Handler" podUID="e5e45422-f52d-4358-acd4-50f60e173df6" podNamespace="openshift-operators" podName="observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.451741 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.458030 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-pqwd9"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.458197 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.469149 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-p7p4m"]
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502628 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 
00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502715 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502757 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5jkjw\" (UniqueName: \"kubernetes.io/projected/c267c58e-aca9-4d40-9433-6ac42b961c69-kube-api-access-5jkjw\") pod \"obo-prometheus-operator-864b67f9b9-vw4dk\" (UID: \"c267c58e-aca9-4d40-9433-6ac42b961c69\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502797 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5e45422-f52d-4358-acd4-50f60e173df6-observability-operator-tls\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502832 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b65b\" (UniqueName: \"kubernetes.io/projected/e5e45422-f52d-4358-acd4-50f60e173df6-kube-api-access-9b65b\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502859 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.502893 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.599005 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jkjw\" (UniqueName: \"kubernetes.io/projected/c267c58e-aca9-4d40-9433-6ac42b961c69-kube-api-access-5jkjw\") pod \"obo-prometheus-operator-864b67f9b9-vw4dk\" (UID: \"c267c58e-aca9-4d40-9433-6ac42b961c69\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.603962 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.604050 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5e45422-f52d-4358-acd4-50f60e173df6-observability-operator-tls\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.604099 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9b65b\" (UniqueName: \"kubernetes.io/projected/e5e45422-f52d-4358-acd4-50f60e173df6-kube-api-access-9b65b\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.604148 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.604186 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.604218 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"
Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.606735 3561 kubelet.go:2429] 
"SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gxgqt"] Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.606850 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c240c075-98b5-4b3e-8bf2-4f7f17b715ba" podNamespace="openshift-operators" podName="perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.616226 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.618684 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.620993 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5e45422-f52d-4358-acd4-50f60e173df6-observability-operator-tls\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.622291 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-t4mn5" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.629664 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.629708 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8202e2e-7630-4d92-aede-476610ebb07c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx\" (UID: \"d8202e2e-7630-4d92-aede-476610ebb07c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.629743 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04b977ab-167b-4c68-8e41-ab6cae4d68c0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-644ff5b658-l678j\" (UID: \"04b977ab-167b-4c68-8e41-ab6cae4d68c0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.637754 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gxgqt"] Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.671888 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b65b\" (UniqueName: \"kubernetes.io/projected/e5e45422-f52d-4358-acd4-50f60e173df6-kube-api-access-9b65b\") pod \"observability-operator-65df589ff7-p7p4m\" (UID: \"e5e45422-f52d-4358-acd4-50f60e173df6\") " pod="openshift-operators/observability-operator-65df589ff7-p7p4m" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.705061 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqkk\" (UniqueName: \"kubernetes.io/projected/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-kube-api-access-6rqkk\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " 
pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.705114 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.707403 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.749852 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.778776 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-p7p4m" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.806234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqkk\" (UniqueName: \"kubernetes.io/projected/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-kube-api-access-6rqkk\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.806323 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.807370 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.849915 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk" Dec 03 00:20:24 crc kubenswrapper[3561]: I1203 00:20:24.855830 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqkk\" (UniqueName: \"kubernetes.io/projected/c240c075-98b5-4b3e-8bf2-4f7f17b715ba-kube-api-access-6rqkk\") pod \"perses-operator-574fd8d65d-gxgqt\" (UID: \"c240c075-98b5-4b3e-8bf2-4f7f17b715ba\") " pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:25 crc kubenswrapper[3561]: I1203 00:20:25.006288 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" Dec 03 00:20:27 crc kubenswrapper[3561]: I1203 00:20:27.623121 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:20:27 crc kubenswrapper[3561]: I1203 00:20:27.623436 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:20:27 crc kubenswrapper[3561]: I1203 00:20:27.623482 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:20:27 crc kubenswrapper[3561]: I1203 00:20:27.624403 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:20:27 crc kubenswrapper[3561]: I1203 00:20:27.624570 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892" gracePeriod=600 Dec 03 00:20:29 crc kubenswrapper[3561]: I1203 00:20:29.564207 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892" exitCode=0 Dec 03 00:20:29 crc kubenswrapper[3561]: I1203 00:20:29.564259 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892"} Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.072742 3561 scope.go:117] "RemoveContainer" containerID="750f1f19ba18cbc9db3c0256fa35b923d9bf8ee6be44ef6c6d0c602b12fe5dd4" Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.196828 3561 scope.go:117] "RemoveContainer" containerID="ffd7b60aaa4fceea735c7b0851d00a85fc76af1d7c20f8f90f8923adac5c0481" Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.563457 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk"] Dec 03 00:20:30 crc kubenswrapper[3561]: W1203 00:20:30.595840 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc267c58e_aca9_4d40_9433_6ac42b961c69.slice/crio-6b6a9fca38bba47992bb16f9e3cbdf1795e904609d8f693fcdfc60b1f7e5a749 WatchSource:0}: Error finding container 
6b6a9fca38bba47992bb16f9e3cbdf1795e904609d8f693fcdfc60b1f7e5a749: Status 404 returned error can't find the container with id 6b6a9fca38bba47992bb16f9e3cbdf1795e904609d8f693fcdfc60b1f7e5a749 Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.689432 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea"} Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.909466 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gxgqt"] Dec 03 00:20:30 crc kubenswrapper[3561]: W1203 00:20:30.911924 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc240c075_98b5_4b3e_8bf2_4f7f17b715ba.slice/crio-cdae2c148e674be5f80492c9a1514a119742a81f0de75946a98ddbb4bf94b453 WatchSource:0}: Error finding container cdae2c148e674be5f80492c9a1514a119742a81f0de75946a98ddbb4bf94b453: Status 404 returned error can't find the container with id cdae2c148e674be5f80492c9a1514a119742a81f0de75946a98ddbb4bf94b453 Dec 03 00:20:30 crc kubenswrapper[3561]: I1203 00:20:30.913709 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx"] Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.174747 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j"] Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.195370 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-p7p4m"] Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.694200 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk" event={"ID":"c267c58e-aca9-4d40-9433-6ac42b961c69","Type":"ContainerStarted","Data":"6b6a9fca38bba47992bb16f9e3cbdf1795e904609d8f693fcdfc60b1f7e5a749"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.697638 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerStarted","Data":"5298e7b7c5c8ccb8b76f7849af4c2a5d945b6453f83654cfc2721b23fee5541b"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.699367 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" event={"ID":"d8202e2e-7630-4d92-aede-476610ebb07c","Type":"ContainerStarted","Data":"ab90ea710d8a08d62f200ebfc2eb642fe3c79288eb2462705c8f13088c17bbf3"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.700315 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" event={"ID":"04b977ab-167b-4c68-8e41-ab6cae4d68c0","Type":"ContainerStarted","Data":"fdc725a4fd13cc46dcdfaeb92bc164f1ded0b1b3a184d62370e38cfd82d40290"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.701232 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-p7p4m" event={"ID":"e5e45422-f52d-4358-acd4-50f60e173df6","Type":"ContainerStarted","Data":"a66eb80296cd56decdf7e881b60e2e5675a708e8ae9e21aabdebe04f70229968"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.702272 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh" event={"ID":"af32daf5-85c5-4180-b117-f3e012a60f99","Type":"ContainerStarted","Data":"0fbe65428188133be2b4eee9ec04f6186d71a225a903db2e45dcfcacba7e89ba"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.703097 3561 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" event={"ID":"c240c075-98b5-4b3e-8bf2-4f7f17b715ba","Type":"ContainerStarted","Data":"cdae2c148e674be5f80492c9a1514a119742a81f0de75946a98ddbb4bf94b453"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.704282 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-756db8c48b-2gw2f" event={"ID":"aea5f24e-a2a6-4723-9e00-144471ed49cd","Type":"ContainerStarted","Data":"fabd12173bcc4d932c5721bfd8896b6fd04bb3196108468f752069645fa313d7"} Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.716724 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f7fkh" podStartSLOduration=11.093827489 podStartE2EDuration="37.716666342s" podCreationTimestamp="2025-12-03 00:19:54 +0000 UTC" firstStartedPulling="2025-12-03 00:19:56.106182296 +0000 UTC m=+794.886616554" lastFinishedPulling="2025-12-03 00:20:22.729021149 +0000 UTC m=+821.509455407" observedRunningTime="2025-12-03 00:20:31.715344961 +0000 UTC m=+830.495779249" watchObservedRunningTime="2025-12-03 00:20:31.716666342 +0000 UTC m=+830.497100610" Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.737142 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/elastic-operator-756db8c48b-2gw2f" podStartSLOduration=3.493927497 podStartE2EDuration="23.737094049s" podCreationTimestamp="2025-12-03 00:20:08 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.678856148 +0000 UTC m=+808.459290406" lastFinishedPulling="2025-12-03 00:20:29.9220227 +0000 UTC m=+828.702456958" observedRunningTime="2025-12-03 00:20:31.732412153 +0000 UTC m=+830.512846401" watchObservedRunningTime="2025-12-03 00:20:31.737094049 +0000 UTC m=+830.517528317" Dec 03 00:20:31 crc kubenswrapper[3561]: I1203 00:20:31.752898 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="service-telemetry/interconnect-operator-7b75f466d4-p9zdh" podStartSLOduration=3.591466903 podStartE2EDuration="24.752857481s" podCreationTimestamp="2025-12-03 00:20:07 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.003269446 +0000 UTC m=+807.783703694" lastFinishedPulling="2025-12-03 00:20:30.164660014 +0000 UTC m=+828.945094272" observedRunningTime="2025-12-03 00:20:31.751151327 +0000 UTC m=+830.531585615" watchObservedRunningTime="2025-12-03 00:20:31.752857481 +0000 UTC m=+830.533291749" Dec 03 00:20:34 crc kubenswrapper[3561]: I1203 00:20:34.607690 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:20:34 crc kubenswrapper[3561]: I1203 00:20:34.609191 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f7fkh" Dec 03 00:20:35 crc kubenswrapper[3561]: I1203 00:20:35.828309 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f7fkh" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="registry-server" probeResult="failure" output=< Dec 03 00:20:35 crc kubenswrapper[3561]: timeout: failed to connect service ":50051" within 1s Dec 03 00:20:35 crc kubenswrapper[3561]: > Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.036069 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.036495 3561 topology_manager.go:215] "Topology Admit Handler" podUID="da255b5b-e06c-4925-84bc-ed08b801a8b5" podNamespace="service-telemetry" podName="elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.037814 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.041929 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.042336 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-vm2db" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.042471 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.042598 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.042723 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.043749 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.043957 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.044066 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.044854 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.065278 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105509 3561 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105580 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105638 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105664 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105823 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/da255b5b-e06c-4925-84bc-ed08b801a8b5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105880 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.105989 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106019 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106045 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106085 3561 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106108 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106148 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106176 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106215 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.106244 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207376 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207433 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207458 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207488 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207512 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207595 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207619 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207653 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207677 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207702 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207729 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207752 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207776 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207801 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/da255b5b-e06c-4925-84bc-ed08b801a8b5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.207822 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.208047 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.208924 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.209175 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.211648 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.213791 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.214064 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.215231 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.220415 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.220982 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/da255b5b-e06c-4925-84bc-ed08b801a8b5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.221210 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.221673 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.226069 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.234310 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.236186 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.240506 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da255b5b-e06c-4925-84bc-ed08b801a8b5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da255b5b-e06c-4925-84bc-ed08b801a8b5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:40 crc kubenswrapper[3561]: I1203 00:20:40.353811 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 03 00:20:41 crc kubenswrapper[3561]: I1203 00:20:41.518710 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:20:41 crc kubenswrapper[3561]: I1203 00:20:41.518989 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:20:41 crc kubenswrapper[3561]: I1203 00:20:41.519026 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:20:41 crc kubenswrapper[3561]: I1203 00:20:41.519042 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:20:41 crc kubenswrapper[3561]: I1203 00:20:41.519069 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:20:44 crc kubenswrapper[3561]: I1203 00:20:44.787301 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f7fkh"
Dec 03 00:20:44 crc kubenswrapper[3561]: I1203 00:20:44.924244 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f7fkh"
Dec 03 00:20:44 crc kubenswrapper[3561]: I1203 00:20:44.986196 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"]
Dec 03 00:20:45 crc kubenswrapper[3561]: I1203 00:20:45.849162 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f7fkh" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="registry-server" containerID="cri-o://5298e7b7c5c8ccb8b76f7849af4c2a5d945b6453f83654cfc2721b23fee5541b" gracePeriod=2
Dec 03 00:20:46 crc kubenswrapper[3561]: I1203 00:20:46.860159 3561 generic.go:334] "Generic (PLEG): container finished" podID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerID="5298e7b7c5c8ccb8b76f7849af4c2a5d945b6453f83654cfc2721b23fee5541b" exitCode=0
Dec 03 00:20:46 crc kubenswrapper[3561]: I1203 00:20:46.860205 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerDied","Data":"5298e7b7c5c8ccb8b76f7849af4c2a5d945b6453f83654cfc2721b23fee5541b"}
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.218479 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"]
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.218938 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" podNamespace="openshift-marketplace" podName="695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.220277 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.226106 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4w6pc"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.252191 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"]
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.326641 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.326954 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.327116 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnklq\" (UniqueName: \"kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.427591 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hnklq\" (UniqueName: \"kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.428159 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.428713 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.429056 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.428659 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.457416 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnklq\" (UniqueName: \"kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:49 crc kubenswrapper[3561]: I1203 00:20:49.632047 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.369941 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"]
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.382489 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7fkh"
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.464397 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 03 00:20:52 crc kubenswrapper[3561]: W1203 00:20:52.486354 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda255b5b_e06c_4925_84bc_ed08b801a8b5.slice/crio-3e6adfb32e049cc012eec704f07b91a6056793e953a84e4b3063f63da26cc0da WatchSource:0}: Error finding container 3e6adfb32e049cc012eec704f07b91a6056793e953a84e4b3063f63da26cc0da: Status 404 returned error can't find the container with id 3e6adfb32e049cc012eec704f07b91a6056793e953a84e4b3063f63da26cc0da
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.492063 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content\") pod \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") "
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.492135 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities\") pod \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") "
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.492187 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmw4r\" (UniqueName: \"kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r\") pod \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\" (UID: \"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46\") "
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.493445 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities" (OuterVolumeSpecName: "utilities") pod "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" (UID: "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.556902 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r" (OuterVolumeSpecName: "kube-api-access-tmw4r") pod "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" (UID: "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46"). InnerVolumeSpecName "kube-api-access-tmw4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.593406 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tmw4r\" (UniqueName: \"kubernetes.io/projected/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-kube-api-access-tmw4r\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.593448 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.947786 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da255b5b-e06c-4925-84bc-ed08b801a8b5","Type":"ContainerStarted","Data":"3e6adfb32e049cc012eec704f07b91a6056793e953a84e4b3063f63da26cc0da"}
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.965645 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7fkh"
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.966377 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7fkh" event={"ID":"0233ee5d-3504-4f2f-a5ed-a4ea595c4f46","Type":"ContainerDied","Data":"d1d8d5658db6760af4ecf80514c0a616b614552b84db9a4e61b152906b415c1a"}
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.966418 3561 scope.go:117] "RemoveContainer" containerID="5298e7b7c5c8ccb8b76f7849af4c2a5d945b6453f83654cfc2721b23fee5541b"
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.970466 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-p7p4m" event={"ID":"e5e45422-f52d-4358-acd4-50f60e173df6","Type":"ContainerStarted","Data":"73a7868ebb2db646f42133a80a0f307b14faa4010c044f838d21b456b6216874"}
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.972009 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerStarted","Data":"929b245a003f77ff1b987d2a51b535deee256930ea339410d93438f96bd6a276"}
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.976717 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" event={"ID":"c240c075-98b5-4b3e-8bf2-4f7f17b715ba","Type":"ContainerStarted","Data":"f2f29b91b71c4fadccae8fede30c821b00cdec0e7b10aae2a64e675c7a5eff83"}
Dec 03 00:20:52 crc kubenswrapper[3561]: I1203 00:20:52.977410 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt"
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.097945 3561 scope.go:117] "RemoveContainer" containerID="466057ed07a08803610cefcb3908f83fcb16e15226ad63aa595c27acdbc6a82f"
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.154279 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt" podStartSLOduration=7.98126905 podStartE2EDuration="29.154203618s" podCreationTimestamp="2025-12-03 00:20:24 +0000 UTC" firstStartedPulling="2025-12-03 00:20:30.968038551 +0000 UTC m=+829.748472809" lastFinishedPulling="2025-12-03 00:20:52.140973119 +0000 UTC m=+850.921407377" observedRunningTime="2025-12-03 00:20:53.117196263 +0000 UTC m=+851.897630521" watchObservedRunningTime="2025-12-03 00:20:53.154203618 +0000 UTC m=+851.934637876"
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.155399 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/observability-operator-65df589ff7-p7p4m" podStartSLOduration=8.219052974 podStartE2EDuration="29.155375044s" podCreationTimestamp="2025-12-03 00:20:24 +0000 UTC" firstStartedPulling="2025-12-03 00:20:31.18159282 +0000 UTC m=+829.962027078" lastFinishedPulling="2025-12-03 00:20:52.11791489 +0000 UTC m=+850.898349148" observedRunningTime="2025-12-03 00:20:53.140228332 +0000 UTC m=+851.920662590" watchObservedRunningTime="2025-12-03 00:20:53.155375044 +0000 UTC m=+851.935809312"
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.234207 3561 scope.go:117] "RemoveContainer" containerID="2e5c9f49e6895fd6932f13fa4ad8975cd4678dd5ef1569ddc7a957c6de14f097"
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.701535 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" (UID: "0233ee5d-3504-4f2f-a5ed-a4ea595c4f46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.721867 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.900357 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"]
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.906180 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f7fkh"]
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.989526 3561 generic.go:334] "Generic (PLEG): container finished" podID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerID="3c40e97080e51b510ab80d8dcbcdde513d2fab2df78f6de77c356607cbd0658b" exitCode=0
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.989601 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerDied","Data":"3c40e97080e51b510ab80d8dcbcdde513d2fab2df78f6de77c356607cbd0658b"}
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.994623 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk" event={"ID":"c267c58e-aca9-4d40-9433-6ac42b961c69","Type":"ContainerStarted","Data":"a91b23fd6f104d951066603b15924b97304c35f60ac6ac11fd90754f2e71da45"}
Dec 03 00:20:53 crc kubenswrapper[3561]: I1203 00:20:53.999378 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" event={"ID":"d8202e2e-7630-4d92-aede-476610ebb07c","Type":"ContainerStarted","Data":"08f5820f4201107e51bf1ce7cdc3227a19311ca34f0cba17e92e73b07f7a6412"}
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.002479 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" event={"ID":"04b977ab-167b-4c68-8e41-ab6cae4d68c0","Type":"ContainerStarted","Data":"c04a572c97013d375d9c7c9795ff5889894276e3a3ca5a119def2c18d9c3dae8"}
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.003815 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.006446 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-65df589ff7-p7p4m"
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.030449 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-vw4dk" podStartSLOduration=8.488325724 podStartE2EDuration="30.030405401s" podCreationTimestamp="2025-12-03 00:20:24 +0000 UTC" firstStartedPulling="2025-12-03 00:20:30.617656867 +0000 UTC m=+829.398091125" lastFinishedPulling="2025-12-03 00:20:52.159736544 +0000 UTC m=+850.940170802" observedRunningTime="2025-12-03 00:20:54.029110861 +0000 UTC m=+852.809545369" watchObservedRunningTime="2025-12-03 00:20:54.030405401 +0000 UTC m=+852.810839659"
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.116064 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-l678j" podStartSLOduration=9.310064849 podStartE2EDuration="30.11594455s" podCreationTimestamp="2025-12-03 00:20:24 +0000 UTC" firstStartedPulling="2025-12-03 00:20:31.188399712 +0000 UTC m=+829.968833970" lastFinishedPulling="2025-12-03 00:20:51.994279413 +0000 UTC m=+850.774713671" observedRunningTime="2025-12-03 00:20:54.065327231 +0000 UTC m=+852.845761499" watchObservedRunningTime="2025-12-03 00:20:54.11594455 +0000 UTC m=+852.896378808"
Dec 03 00:20:54 crc kubenswrapper[3561]: I1203 00:20:54.260052 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx" podStartSLOduration=9.116367779 podStartE2EDuration="30.259999134s" podCreationTimestamp="2025-12-03 00:20:24 +0000 UTC" firstStartedPulling="2025-12-03 00:20:30.969900149 +0000 UTC m=+829.750334407" lastFinishedPulling="2025-12-03 00:20:52.113531503 +0000 UTC m=+850.893965762" observedRunningTime="2025-12-03 00:20:54.242042114 +0000 UTC m=+853.022476372" watchObservedRunningTime="2025-12-03 00:20:54.259999134 +0000 UTC m=+853.040433402"
Dec 03 00:20:55 crc kubenswrapper[3561]: I1203 00:20:55.670474 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" path="/var/lib/kubelet/pods/0233ee5d-3504-4f2f-a5ed-a4ea595c4f46/volumes"
Dec 03 00:21:01 crc kubenswrapper[3561]: I1203 00:21:01.052309 3561 generic.go:334] "Generic (PLEG): container finished" podID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerID="74e89fd10c2808eca8b6b650cdf59ffff0cf8e8957765a6213c97ce6eefe4790" exitCode=0
Dec 03 00:21:01 crc kubenswrapper[3561]: I1203 00:21:01.052455 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerDied","Data":"74e89fd10c2808eca8b6b650cdf59ffff0cf8e8957765a6213c97ce6eefe4790"}
Dec 03 00:21:05 crc kubenswrapper[3561]: I1203 00:21:05.009748 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-574fd8d65d-gxgqt"
Dec 03 00:21:05 crc kubenswrapper[3561]: I1203 00:21:05.085263 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerStarted","Data":"34fbf5aade954c84eff6d5a0ef800d8b15b00a6ee2933576ec8a60442e863f8d"}
Dec 03 00:21:05 crc kubenswrapper[3561]: I1203 00:21:05.101119 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" podStartSLOduration=9.876644773 podStartE2EDuration="16.101071888s" podCreationTimestamp="2025-12-03 00:20:49 +0000 UTC" firstStartedPulling="2025-12-03 00:20:53.992050195 +0000 UTC m=+852.772484453" lastFinishedPulling="2025-12-03 00:21:00.21647732 +0000 UTC m=+858.996911568" observedRunningTime="2025-12-03 00:21:05.09825594 +0000 UTC m=+863.878690198" watchObservedRunningTime="2025-12-03 00:21:05.101071888 +0000 UTC m=+863.881506146"
Dec 03 00:21:06 crc kubenswrapper[3561]: I1203 00:21:06.091934 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da255b5b-e06c-4925-84bc-ed08b801a8b5","Type":"ContainerStarted","Data":"b34599e38bb609c08a4f4daf8eaa14a428f1db4a0dc7489629738f97a34bd3a4"}
Dec 03 00:21:06 crc kubenswrapper[3561]: I1203 00:21:06.094457 3561 generic.go:334] "Generic (PLEG): container finished" podID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerID="34fbf5aade954c84eff6d5a0ef800d8b15b00a6ee2933576ec8a60442e863f8d" exitCode=0
Dec 03 00:21:06 crc kubenswrapper[3561]: I1203 00:21:06.094507 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerDied","Data":"34fbf5aade954c84eff6d5a0ef800d8b15b00a6ee2933576ec8a60442e863f8d"}
Dec 03 00:21:06 crc kubenswrapper[3561]: I1203 00:21:06.376626 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 03 00:21:06 crc kubenswrapper[3561]: I1203 00:21:06.434930 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.740115 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.837477 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util\") pod \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") "
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.837582 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnklq\" (UniqueName: \"kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq\") pod \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") "
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.838611 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle\") pod \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\" (UID: \"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8\") "
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.839600 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle" (OuterVolumeSpecName: "bundle") pod "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" (UID: "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.844700 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq" (OuterVolumeSpecName: "kube-api-access-hnklq") pod "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" (UID: "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8"). InnerVolumeSpecName "kube-api-access-hnklq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.858143 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util" (OuterVolumeSpecName: "util") pod "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" (UID: "0eb6f441-29ca-4f0c-a7e9-69c5dee817e8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.940532 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hnklq\" (UniqueName: \"kubernetes.io/projected/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-kube-api-access-hnklq\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.940585 3561 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:07 crc kubenswrapper[3561]: I1203 00:21:07.940595 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eb6f441-29ca-4f0c-a7e9-69c5dee817e8-util\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:08 crc kubenswrapper[3561]: I1203 00:21:08.106145 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff"
event={"ID":"0eb6f441-29ca-4f0c-a7e9-69c5dee817e8","Type":"ContainerDied","Data":"929b245a003f77ff1b987d2a51b535deee256930ea339410d93438f96bd6a276"} Dec 03 00:21:08 crc kubenswrapper[3561]: I1203 00:21:08.106465 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="929b245a003f77ff1b987d2a51b535deee256930ea339410d93438f96bd6a276" Dec 03 00:21:08 crc kubenswrapper[3561]: I1203 00:21:08.106173 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff" Dec 03 00:21:09 crc kubenswrapper[3561]: I1203 00:21:09.112705 3561 generic.go:334] "Generic (PLEG): container finished" podID="da255b5b-e06c-4925-84bc-ed08b801a8b5" containerID="b34599e38bb609c08a4f4daf8eaa14a428f1db4a0dc7489629738f97a34bd3a4" exitCode=0 Dec 03 00:21:09 crc kubenswrapper[3561]: I1203 00:21:09.112879 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da255b5b-e06c-4925-84bc-ed08b801a8b5","Type":"ContainerDied","Data":"b34599e38bb609c08a4f4daf8eaa14a428f1db4a0dc7489629738f97a34bd3a4"} Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.130290 3561 generic.go:334] "Generic (PLEG): container finished" podID="da255b5b-e06c-4925-84bc-ed08b801a8b5" containerID="8e217123f2a6dfdf0ab9f5f897340d97e160085cc57572c1a762ecccedf2c01f" exitCode=0 Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.130490 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da255b5b-e06c-4925-84bc-ed08b801a8b5","Type":"ContainerDied","Data":"8e217123f2a6dfdf0ab9f5f897340d97e160085cc57572c1a762ecccedf2c01f"} Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499380 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf"] Dec 03 00:21:12 crc kubenswrapper[3561]: 
I1203 00:21:12.499520 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cf8d647d-5031-4517-9d12-ec1c78c3af4d" podNamespace="cert-manager-operator" podName="cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499759 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="extract" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499784 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="extract" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499800 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="extract-content" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499810 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="extract-content" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499825 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="pull" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499833 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="pull" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499848 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="util" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499857 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="util" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499867 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="registry-server" Dec 03 00:21:12 crc 
kubenswrapper[3561]: I1203 00:21:12.499875 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="registry-server" Dec 03 00:21:12 crc kubenswrapper[3561]: E1203 00:21:12.499888 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="extract-utilities" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.499898 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="extract-utilities" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.500052 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb6f441-29ca-4f0c-a7e9-69c5dee817e8" containerName="extract" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.500066 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="0233ee5d-3504-4f2f-a5ed-a4ea595c4f46" containerName="registry-server" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.500598 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.503043 3561 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-7vcbm" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.506580 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.512516 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf"] Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.515086 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.600431 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpfz\" (UniqueName: \"kubernetes.io/projected/cf8d647d-5031-4517-9d12-ec1c78c3af4d-kube-api-access-wmpfz\") pod \"cert-manager-operator-controller-manager-5774f55cb7-vgbbf\" (UID: \"cf8d647d-5031-4517-9d12-ec1c78c3af4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.701752 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpfz\" (UniqueName: \"kubernetes.io/projected/cf8d647d-5031-4517-9d12-ec1c78c3af4d-kube-api-access-wmpfz\") pod \"cert-manager-operator-controller-manager-5774f55cb7-vgbbf\" (UID: \"cf8d647d-5031-4517-9d12-ec1c78c3af4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.723674 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wmpfz\" (UniqueName: \"kubernetes.io/projected/cf8d647d-5031-4517-9d12-ec1c78c3af4d-kube-api-access-wmpfz\") pod \"cert-manager-operator-controller-manager-5774f55cb7-vgbbf\" (UID: \"cf8d647d-5031-4517-9d12-ec1c78c3af4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:12 crc kubenswrapper[3561]: I1203 00:21:12.815236 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" Dec 03 00:21:13 crc kubenswrapper[3561]: I1203 00:21:13.286518 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf"] Dec 03 00:21:14 crc kubenswrapper[3561]: I1203 00:21:14.143382 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da255b5b-e06c-4925-84bc-ed08b801a8b5","Type":"ContainerStarted","Data":"1a40e1f52e5fd03cbc194de0149a370417aff30fb6f9ff6e905b0ba352f27fd3"} Dec 03 00:21:14 crc kubenswrapper[3561]: I1203 00:21:14.144344 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" event={"ID":"cf8d647d-5031-4517-9d12-ec1c78c3af4d","Type":"ContainerStarted","Data":"13ae012b010d2797c373b0cc3f00b464fac9c2cbfc6e2c973b7b3221b12b477c"} Dec 03 00:21:14 crc kubenswrapper[3561]: I1203 00:21:14.187373 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=21.481977758 podStartE2EDuration="34.187333399s" podCreationTimestamp="2025-12-03 00:20:40 +0000 UTC" firstStartedPulling="2025-12-03 00:20:52.488053687 +0000 UTC m=+851.268487955" lastFinishedPulling="2025-12-03 00:21:05.193409338 +0000 UTC m=+863.973843596" observedRunningTime="2025-12-03 00:21:14.18285166 +0000 UTC m=+872.963285928" 
watchObservedRunningTime="2025-12-03 00:21:14.187333399 +0000 UTC m=+872.967767647" Dec 03 00:21:15 crc kubenswrapper[3561]: I1203 00:21:15.354880 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.292803 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.293249 3561 topology_manager.go:215] "Topology Admit Handler" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" podNamespace="service-telemetry" podName="service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.307209 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.312771 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.315328 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.327048 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.327273 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.329868 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381661 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mjmtx\" (UniqueName: \"kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381712 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381738 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381893 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381936 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.381981 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382043 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382084 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382116 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382177 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382202 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.382230 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483124 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483182 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: 
I1203 00:21:18.483219 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483250 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483285 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483611 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.483748 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmtx\" (UniqueName: \"kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484178 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484237 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484309 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484333 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484391 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484417 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484570 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.484931 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.485012 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.485601 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.486166 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.492705 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.492825 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.493070 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.498756 3561 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.507033 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.516892 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmtx\" (UniqueName: \"kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx\") pod \"service-telemetry-operator-1-build\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:18 crc kubenswrapper[3561]: I1203 00:21:18.654085 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 03 00:21:21 crc kubenswrapper[3561]: I1203 00:21:21.220878 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 03 00:21:22 crc kubenswrapper[3561]: I1203 00:21:22.198799 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"19d9b79e-aa10-454a-b243-53f9d92af37c","Type":"ContainerStarted","Data":"bfc49d6e7e493bc5c74b3571f0f9561057010b9dd8f35b3f98761908c13c8a3b"}
Dec 03 00:21:22 crc kubenswrapper[3561]: I1203 00:21:22.200495 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" event={"ID":"cf8d647d-5031-4517-9d12-ec1c78c3af4d","Type":"ContainerStarted","Data":"1aea30829b2628a6656bd9ad375ca4e4733e8afe397779e8de0870bb025283fa"}
Dec 03 00:21:22 crc kubenswrapper[3561]: I1203 00:21:22.244756 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-vgbbf" podStartSLOduration=2.423483807 podStartE2EDuration="10.244708594s" podCreationTimestamp="2025-12-03 00:21:12 +0000 UTC" firstStartedPulling="2025-12-03 00:21:13.31671864 +0000 UTC m=+872.097152898" lastFinishedPulling="2025-12-03 00:21:21.137943427 +0000 UTC m=+879.918377685" observedRunningTime="2025-12-03 00:21:22.238910073 +0000 UTC m=+881.019344341" watchObservedRunningTime="2025-12-03 00:21:22.244708594 +0000 UTC m=+881.025142862"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.070277 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"]
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.070741 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7499489a-6d26-4a2d-b1e2-ffb9410d42cc" podNamespace="cert-manager" podName="cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.071467 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.074641 3561 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xs2gd"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.074869 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.075004 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.081163 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"]
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.191060 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzp5g\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-kube-api-access-jzp5g\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.191143 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.292245 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.292324 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jzp5g\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-kube-api-access-jzp5g\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.320979 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.324106 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzp5g\" (UniqueName: \"kubernetes.io/projected/7499489a-6d26-4a2d-b1e2-ffb9410d42cc-kube-api-access-jzp5g\") pod \"cert-manager-webhook-58ffc98b58-6q7xn\" (UID: \"7499489a-6d26-4a2d-b1e2-ffb9410d42cc\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.408202 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"
Dec 03 00:21:25 crc kubenswrapper[3561]: I1203 00:21:25.504698 3561 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da255b5b-e06c-4925-84bc-ed08b801a8b5" containerName="elasticsearch" probeResult="failure" output=<
Dec 03 00:21:25 crc kubenswrapper[3561]: {"timestamp": "2025-12-03T00:21:25+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 03 00:21:25 crc kubenswrapper[3561]: >
Dec 03 00:21:26 crc kubenswrapper[3561]: I1203 00:21:26.845686 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"]
Dec 03 00:21:26 crc kubenswrapper[3561]: I1203 00:21:26.845803 3561 topology_manager.go:215] "Topology Admit Handler" podUID="e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e" podNamespace="cert-manager" podName="cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:26 crc kubenswrapper[3561]: I1203 00:21:26.846396 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:26 crc kubenswrapper[3561]: I1203 00:21:26.850995 3561 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d8sjc"
Dec 03 00:21:26 crc kubenswrapper[3561]: I1203 00:21:26.940972 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"]
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.032428 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.032673 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdmk\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-kube-api-access-nqdmk\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.134519 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.134656 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nqdmk\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-kube-api-access-nqdmk\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.161005 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqdmk\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-kube-api-access-nqdmk\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.173058 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-p58t5\" (UID: \"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:27 crc kubenswrapper[3561]: I1203 00:21:27.463345 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"
Dec 03 00:21:28 crc kubenswrapper[3561]: I1203 00:21:28.151163 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5"]
Dec 03 00:21:28 crc kubenswrapper[3561]: I1203 00:21:28.230858 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-6q7xn"]
Dec 03 00:21:28 crc kubenswrapper[3561]: I1203 00:21:28.258949 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5" event={"ID":"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e","Type":"ContainerStarted","Data":"1a64317b2e0866609b2eb6692591d45c3d2bb403446ef8915cf010c5068bf1c8"}
Dec 03 00:21:28 crc kubenswrapper[3561]: I1203 00:21:28.267137 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.267991 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"19d9b79e-aa10-454a-b243-53f9d92af37c","Type":"ContainerStarted","Data":"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"}
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.268094 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" containerName="manage-dockerfile" containerID="cri-o://f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e" gracePeriod=30
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.272592 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn" event={"ID":"7499489a-6d26-4a2d-b1e2-ffb9410d42cc","Type":"ContainerStarted","Data":"e9972b75062fe3ef8ce6e08e5d9b2606ea9beaf207f189588755e4e48f805bc4"}
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.809431 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_19d9b79e-aa10-454a-b243-53f9d92af37c/manage-dockerfile/0.log"
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.809509 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978077 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978136 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978197 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978220 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978240 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978301 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978295 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978605 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978741 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978782 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978842 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978829 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978891 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978897 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.978952 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979172 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979203 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979257 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979287 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjmtx\" (UniqueName: \"kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979654 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles\") pod \"19d9b79e-aa10-454a-b243-53f9d92af37c\" (UID: \"19d9b79e-aa10-454a-b243-53f9d92af37c\") "
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979882 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979900 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979910 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979921 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979933 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979942 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19d9b79e-aa10-454a-b243-53f9d92af37c-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979951 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/19d9b79e-aa10-454a-b243-53f9d92af37c-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.979960 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.980463 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.988786 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.988811 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx" (OuterVolumeSpecName: "kube-api-access-mjmtx") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "kube-api-access-mjmtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:21:29 crc kubenswrapper[3561]: I1203 00:21:29.990749 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "19d9b79e-aa10-454a-b243-53f9d92af37c" (UID: "19d9b79e-aa10-454a-b243-53f9d92af37c"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.081294 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.081335 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/19d9b79e-aa10-454a-b243-53f9d92af37c-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.081350 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mjmtx\" (UniqueName: \"kubernetes.io/projected/19d9b79e-aa10-454a-b243-53f9d92af37c-kube-api-access-mjmtx\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.081363 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/19d9b79e-aa10-454a-b243-53f9d92af37c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.296777 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_19d9b79e-aa10-454a-b243-53f9d92af37c/manage-dockerfile/0.log"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.296847 3561 generic.go:334] "Generic (PLEG): container finished" podID="19d9b79e-aa10-454a-b243-53f9d92af37c" containerID="f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e" exitCode=1
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.296887 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"19d9b79e-aa10-454a-b243-53f9d92af37c","Type":"ContainerDied","Data":"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"}
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.296916 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"19d9b79e-aa10-454a-b243-53f9d92af37c","Type":"ContainerDied","Data":"bfc49d6e7e493bc5c74b3571f0f9561057010b9dd8f35b3f98761908c13c8a3b"}
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.296943 3561 scope.go:117] "RemoveContainer" containerID="f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.297094 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.340113 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.340270 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" podNamespace="service-telemetry" podName="service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: E1203 00:21:30.340432 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" containerName="manage-dockerfile"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.340442 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" containerName="manage-dockerfile"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.340562 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" containerName="manage-dockerfile"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.341321 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.347014 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.347240 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.351289 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.351475 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.414061 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.470157 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.490691 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499022 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499100 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499132 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499163 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499194 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499215 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499249 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499297 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499329 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68jb\" (UniqueName: \"kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499350 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499377 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.499402 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.552511 3561 scope.go:117] "RemoveContainer" containerID="f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"
Dec 03 00:21:30 crc kubenswrapper[3561]: E1203 00:21:30.555159 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e\": container with ID starting with f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e not found: ID does not exist" containerID="f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.555240 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e"} err="failed to get container status \"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e\": rpc error: code = NotFound desc = could not find container \"f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e\": container with ID starting with f071ed02f54a63881a63663cc4c8c8a79223e09e9616a6213189e526f7ad702e not found: ID does not exist"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601639 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601738 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601765 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601800 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601833 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.601859 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602472 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602571 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602753 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602796 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c68jb\" (UniqueName: 
\"kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602820 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602841 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602870 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.602990 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.603242 3561 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.603438 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.603940 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.605186 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.605195 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc 
kubenswrapper[3561]: I1203 00:21:30.605239 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.605496 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.623439 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.635273 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68jb\" (UniqueName: \"kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.651742 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:30 crc kubenswrapper[3561]: I1203 00:21:30.714836 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:31 crc kubenswrapper[3561]: I1203 00:21:31.040331 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:21:31 crc kubenswrapper[3561]: I1203 00:21:31.301857 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:21:31 crc kubenswrapper[3561]: I1203 00:21:31.319418 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerStarted","Data":"fa1f419e6ac1f1071762f340dee95f7551c35d86345fb6390419d40bd4736062"} Dec 03 00:21:31 crc kubenswrapper[3561]: I1203 00:21:31.673084 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d9b79e-aa10-454a-b243-53f9d92af37c" path="/var/lib/kubelet/pods/19d9b79e-aa10-454a-b243-53f9d92af37c/volumes" Dec 03 00:21:32 crc kubenswrapper[3561]: I1203 00:21:32.328844 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerStarted","Data":"5b0eefe949b04a48d622ada04186b41e49a8e88c5b949a997cf1d2917ce4c012"} Dec 03 00:21:34 crc kubenswrapper[3561]: I1203 00:21:34.340229 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5" event={"ID":"e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e","Type":"ContainerStarted","Data":"804c92783247757ad44b0b0614574353b41b75cb8972553cef6669ae3810ad4c"} Dec 03 00:21:34 crc kubenswrapper[3561]: I1203 00:21:34.342380 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn" event={"ID":"7499489a-6d26-4a2d-b1e2-ffb9410d42cc","Type":"ContainerStarted","Data":"7f2ea84eeb9a705b19d3faeed399e5b5d77f8f1e1e94743f36d7457827b9a981"} Dec 03 00:21:34 crc kubenswrapper[3561]: I1203 00:21:34.342764 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn" Dec 03 00:21:34 crc kubenswrapper[3561]: I1203 00:21:34.360849 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-p58t5" podStartSLOduration=2.785834528 podStartE2EDuration="8.360803263s" podCreationTimestamp="2025-12-03 00:21:26 +0000 UTC" firstStartedPulling="2025-12-03 00:21:28.201749157 +0000 UTC m=+886.982183415" lastFinishedPulling="2025-12-03 00:21:33.776717892 +0000 UTC m=+892.557152150" observedRunningTime="2025-12-03 00:21:34.355714174 +0000 UTC m=+893.136148432" watchObservedRunningTime="2025-12-03 00:21:34.360803263 +0000 UTC m=+893.141237531" Dec 03 00:21:34 crc kubenswrapper[3561]: I1203 00:21:34.381658 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn" podStartSLOduration=3.879908813 podStartE2EDuration="9.381608972s" podCreationTimestamp="2025-12-03 00:21:25 +0000 UTC" firstStartedPulling="2025-12-03 00:21:28.25888745 +0000 UTC m=+887.039321708" lastFinishedPulling="2025-12-03 00:21:33.760587609 +0000 UTC m=+892.541021867" observedRunningTime="2025-12-03 00:21:34.379444495 +0000 UTC m=+893.159878753" watchObservedRunningTime="2025-12-03 00:21:34.381608972 +0000 UTC m=+893.162043230" Dec 03 00:21:40 crc kubenswrapper[3561]: I1203 00:21:40.418138 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-58ffc98b58-6q7xn" Dec 03 00:21:41 crc kubenswrapper[3561]: I1203 00:21:41.520254 3561 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:21:41 crc kubenswrapper[3561]: I1203 00:21:41.520639 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:21:41 crc kubenswrapper[3561]: I1203 00:21:41.520699 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:21:41 crc kubenswrapper[3561]: I1203 00:21:41.520715 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:21:41 crc kubenswrapper[3561]: I1203 00:21:41.520739 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:21:42 crc kubenswrapper[3561]: I1203 00:21:42.953693 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqstp"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:42.953814 3561 topology_manager.go:215] "Topology Admit Handler" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" podNamespace="openshift-marketplace" podName="community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:42.954996 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:42.964665 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqstp"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.127501 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.127562 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.127698 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bks5x\" (UniqueName: \"kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.229125 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.229191 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.229244 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bks5x\" (UniqueName: \"kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.229774 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.229827 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.258628 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bks5x\" (UniqueName: \"kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x\") pod \"community-operators-wqstp\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") " pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:43.269382 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqstp" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.154291 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-755d7666d5-2zlnx"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.154439 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f9dd874d-0d74-40c8-991f-a10b62bfb3df" podNamespace="cert-manager" podName="cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.155374 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.157712 3561 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s2q4v" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.164591 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-2zlnx"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.308034 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-bound-sa-token\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.308714 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmg2\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-kube-api-access-btmg2\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.410098 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-bound-sa-token\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.410164 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-btmg2\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-kube-api-access-btmg2\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.553739 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmg2\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-kube-api-access-btmg2\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.559903 3561 generic.go:334] "Generic (PLEG): container finished" podID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerID="5b0eefe949b04a48d622ada04186b41e49a8e88c5b949a997cf1d2917ce4c012" exitCode=0 Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.559942 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerDied","Data":"5b0eefe949b04a48d622ada04186b41e49a8e88c5b949a997cf1d2917ce4c012"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.572126 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9dd874d-0d74-40c8-991f-a10b62bfb3df-bound-sa-token\") pod \"cert-manager-755d7666d5-2zlnx\" (UID: \"f9dd874d-0d74-40c8-991f-a10b62bfb3df\") " 
pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:44.775531 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-2zlnx" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:45.031455 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqstp"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:45.363617 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-2zlnx"] Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:45.567018 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerStarted","Data":"2ad1bd0acbcf409568dba16a6569fabc5e546b6879debb266ac728f7257f6e36"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:45.567794 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-755d7666d5-2zlnx" event={"ID":"f9dd874d-0d74-40c8-991f-a10b62bfb3df","Type":"ContainerStarted","Data":"df7e8ee46451774ecaf03076ec252799ecd7f1b0f5ef9822582bd2e6892ec8d3"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:46.586127 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerStarted","Data":"60a588af29f2f6e4a60158191f18f7f478e5592a7bbdcca5f20f2ad9a3ab7062"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:46.587808 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerStarted","Data":"31c0ade272e239e3b31049887d1f35ea8871a4361f52a2ac1fcae23d2c15cdea"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.631075 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-755d7666d5-2zlnx" event={"ID":"f9dd874d-0d74-40c8-991f-a10b62bfb3df","Type":"ContainerStarted","Data":"7584780f860062db9aaf7a03db9945405ebad845b04ba012de53daeb6c6231b5"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.658254 3561 generic.go:334] "Generic (PLEG): container finished" podID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerID="60a588af29f2f6e4a60158191f18f7f478e5592a7bbdcca5f20f2ad9a3ab7062" exitCode=0 Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.658369 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerDied","Data":"60a588af29f2f6e4a60158191f18f7f478e5592a7bbdcca5f20f2ad9a3ab7062"} Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.667468 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-755d7666d5-2zlnx" podStartSLOduration=3.6673970799999998 podStartE2EDuration="3.66739708s" podCreationTimestamp="2025-12-03 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:21:47.661725323 +0000 UTC m=+906.442159581" watchObservedRunningTime="2025-12-03 00:21:47.66739708 +0000 UTC m=+906.447831358" Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.668779 3561 generic.go:334] "Generic (PLEG): container finished" podID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerID="31c0ade272e239e3b31049887d1f35ea8871a4361f52a2ac1fcae23d2c15cdea" exitCode=0 Dec 03 00:21:47 crc kubenswrapper[3561]: I1203 00:21:47.674959 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerDied","Data":"31c0ade272e239e3b31049887d1f35ea8871a4361f52a2ac1fcae23d2c15cdea"} Dec 03 00:21:48 crc kubenswrapper[3561]: I1203 00:21:48.736334 
3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerStarted","Data":"ca086e2212a73d02384ecc099fe3c4aa95e10f8fb51c8d3e501763d65b4be2f0"} Dec 03 00:21:48 crc kubenswrapper[3561]: I1203 00:21:48.745981 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerStarted","Data":"395e1351d92fbf6d434b57c6040a56c16051813c41290a1b67638f1fa730e296"} Dec 03 00:21:48 crc kubenswrapper[3561]: I1203 00:21:48.791952 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=18.79189716 podStartE2EDuration="18.79189716s" podCreationTimestamp="2025-12-03 00:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:21:48.785647665 +0000 UTC m=+907.566081943" watchObservedRunningTime="2025-12-03 00:21:48.79189716 +0000 UTC m=+907.572331438" Dec 03 00:21:52 crc kubenswrapper[3561]: I1203 00:21:52.804459 3561 generic.go:334] "Generic (PLEG): container finished" podID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerID="ca086e2212a73d02384ecc099fe3c4aa95e10f8fb51c8d3e501763d65b4be2f0" exitCode=0 Dec 03 00:21:52 crc kubenswrapper[3561]: I1203 00:21:52.804557 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerDied","Data":"ca086e2212a73d02384ecc099fe3c4aa95e10f8fb51c8d3e501763d65b4be2f0"} Dec 03 00:21:54 crc kubenswrapper[3561]: I1203 00:21:54.817977 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" 
event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerStarted","Data":"ecb3e346367f2b838d12af016a067b572d9b2b688da5784c4b3e8064831079b6"}
Dec 03 00:21:54 crc kubenswrapper[3561]: I1203 00:21:54.864897 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqstp" podStartSLOduration=7.406767121 podStartE2EDuration="12.864847319s" podCreationTimestamp="2025-12-03 00:21:42 +0000 UTC" firstStartedPulling="2025-12-03 00:21:47.670738394 +0000 UTC m=+906.451172652" lastFinishedPulling="2025-12-03 00:21:53.128818582 +0000 UTC m=+911.909252850" observedRunningTime="2025-12-03 00:21:54.862426543 +0000 UTC m=+913.642860801" watchObservedRunningTime="2025-12-03 00:21:54.864847319 +0000 UTC m=+913.645281587"
Dec 03 00:22:03 crc kubenswrapper[3561]: I1203 00:22:03.269639 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:03 crc kubenswrapper[3561]: I1203 00:22:03.269969 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:03 crc kubenswrapper[3561]: I1203 00:22:03.457222 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:03 crc kubenswrapper[3561]: I1203 00:22:03.969275 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:04 crc kubenswrapper[3561]: I1203 00:22:04.015096 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqstp"]
Dec 03 00:22:05 crc kubenswrapper[3561]: I1203 00:22:05.874721 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqstp" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="registry-server" containerID="cri-o://ecb3e346367f2b838d12af016a067b572d9b2b688da5784c4b3e8064831079b6" gracePeriod=2
Dec 03 00:22:07 crc kubenswrapper[3561]: I1203 00:22:07.888514 3561 generic.go:334] "Generic (PLEG): container finished" podID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerID="ecb3e346367f2b838d12af016a067b572d9b2b688da5784c4b3e8064831079b6" exitCode=0
Dec 03 00:22:07 crc kubenswrapper[3561]: I1203 00:22:07.888592 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerDied","Data":"ecb3e346367f2b838d12af016a067b572d9b2b688da5784c4b3e8064831079b6"}
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.414181 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.617614 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities\") pod \"05b6705c-6ffb-4b4e-8714-6238de53aef1\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") "
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.617653 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bks5x\" (UniqueName: \"kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x\") pod \"05b6705c-6ffb-4b4e-8714-6238de53aef1\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") "
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.617719 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content\") pod \"05b6705c-6ffb-4b4e-8714-6238de53aef1\" (UID: \"05b6705c-6ffb-4b4e-8714-6238de53aef1\") "
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.618508 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities" (OuterVolumeSpecName: "utilities") pod "05b6705c-6ffb-4b4e-8714-6238de53aef1" (UID: "05b6705c-6ffb-4b4e-8714-6238de53aef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.622662 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x" (OuterVolumeSpecName: "kube-api-access-bks5x") pod "05b6705c-6ffb-4b4e-8714-6238de53aef1" (UID: "05b6705c-6ffb-4b4e-8714-6238de53aef1"). InnerVolumeSpecName "kube-api-access-bks5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.718672 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.718713 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bks5x\" (UniqueName: \"kubernetes.io/projected/05b6705c-6ffb-4b4e-8714-6238de53aef1-kube-api-access-bks5x\") on node \"crc\" DevicePath \"\""
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.896731 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqstp" event={"ID":"05b6705c-6ffb-4b4e-8714-6238de53aef1","Type":"ContainerDied","Data":"2ad1bd0acbcf409568dba16a6569fabc5e546b6879debb266ac728f7257f6e36"}
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.896867 3561 scope.go:117] "RemoveContainer" containerID="ecb3e346367f2b838d12af016a067b572d9b2b688da5784c4b3e8064831079b6"
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.897015 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqstp"
Dec 03 00:22:08 crc kubenswrapper[3561]: I1203 00:22:08.929684 3561 scope.go:117] "RemoveContainer" containerID="ca086e2212a73d02384ecc099fe3c4aa95e10f8fb51c8d3e501763d65b4be2f0"
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.001628 3561 scope.go:117] "RemoveContainer" containerID="31c0ade272e239e3b31049887d1f35ea8871a4361f52a2ac1fcae23d2c15cdea"
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.150117 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05b6705c-6ffb-4b4e-8714-6238de53aef1" (UID: "05b6705c-6ffb-4b4e-8714-6238de53aef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.226878 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b6705c-6ffb-4b4e-8714-6238de53aef1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.293164 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqstp"]
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.308242 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqstp"]
Dec 03 00:22:09 crc kubenswrapper[3561]: I1203 00:22:09.671237 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" path="/var/lib/kubelet/pods/05b6705c-6ffb-4b4e-8714-6238de53aef1/volumes"
Dec 03 00:22:41 crc kubenswrapper[3561]: I1203 00:22:41.521649 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:22:41 crc kubenswrapper[3561]: I1203 00:22:41.522219 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:22:41 crc kubenswrapper[3561]: I1203 00:22:41.522254 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:22:41 crc kubenswrapper[3561]: I1203 00:22:41.522291 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:22:41 crc kubenswrapper[3561]: I1203 00:22:41.522315 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:22:57 crc kubenswrapper[3561]: I1203 00:22:57.622762 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:22:57 crc kubenswrapper[3561]: I1203 00:22:57.623275 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:23:27 crc kubenswrapper[3561]: I1203 00:23:27.622784 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:23:27 crc kubenswrapper[3561]: I1203 00:23:27.623395 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:23:41 crc kubenswrapper[3561]: I1203 00:23:41.571376 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:23:41 crc kubenswrapper[3561]: I1203 00:23:41.571913 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:23:41 crc kubenswrapper[3561]: I1203 00:23:41.572013 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:23:41 crc kubenswrapper[3561]: I1203 00:23:41.572041 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:23:41 crc kubenswrapper[3561]: I1203 00:23:41.572068 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:23:46 crc kubenswrapper[3561]: I1203 00:23:46.728187 3561 generic.go:334] "Generic (PLEG): container finished" podID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerID="395e1351d92fbf6d434b57c6040a56c16051813c41290a1b67638f1fa730e296" exitCode=0
Dec 03 00:23:46 crc kubenswrapper[3561]: I1203 00:23:46.728257 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerDied","Data":"395e1351d92fbf6d434b57c6040a56c16051813c41290a1b67638f1fa730e296"}
Dec 03 00:23:47 crc kubenswrapper[3561]: I1203 00:23:47.991069 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050751 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050800 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050834 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050870 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050905 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050943 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c68jb\" (UniqueName: \"kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.050968 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.051961 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.051989 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052013 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.051025 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052035 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052044 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052101 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052143 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir\") pod \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\" (UID: \"9a00a082-1937-4f6e-b9f3-b27db40ab02d\") "
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052507 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052939 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052975 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052995 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.053012 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.052932 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.056346 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.056348 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.056340 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb" (OuterVolumeSpecName: "kube-api-access-c68jb") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "kube-api-access-c68jb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.068776 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.091835 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153875 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153922 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153953 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153969 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153982 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9a00a082-1937-4f6e-b9f3-b27db40ab02d-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.153995 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c68jb\" (UniqueName: \"kubernetes.io/projected/9a00a082-1937-4f6e-b9f3-b27db40ab02d-kube-api-access-c68jb\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.232914 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.260718 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.744335 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"9a00a082-1937-4f6e-b9f3-b27db40ab02d","Type":"ContainerDied","Data":"fa1f419e6ac1f1071762f340dee95f7551c35d86345fb6390419d40bd4736062"}
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.744698 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa1f419e6ac1f1071762f340dee95f7551c35d86345fb6390419d40bd4736062"
Dec 03 00:23:48 crc kubenswrapper[3561]: I1203 00:23:48.744765 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 03 00:23:50 crc kubenswrapper[3561]: I1203 00:23:50.315656 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9a00a082-1937-4f6e-b9f3-b27db40ab02d" (UID: "9a00a082-1937-4f6e-b9f3-b27db40ab02d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:23:50 crc kubenswrapper[3561]: I1203 00:23:50.388395 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9a00a082-1937-4f6e-b9f3-b27db40ab02d-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166024 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166393 3561 topology_manager.go:215] "Topology Admit Handler" podUID="49909792-503f-4cd8-9578-0c60a792664c" podNamespace="service-telemetry" podName="smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166590 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="docker-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166605 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="docker-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166614 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="registry-server"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166620 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="registry-server"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166631 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="git-clone"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166638 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="git-clone"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166650 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="extract-utilities"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166657 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="extract-utilities"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166665 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="manage-dockerfile"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166671 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="manage-dockerfile"
Dec 03 00:23:53 crc kubenswrapper[3561]: E1203 00:23:53.166681 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="extract-content"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166687 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="extract-content"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166812 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b6705c-6ffb-4b4e-8714-6238de53aef1" containerName="registry-server"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.166823 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a00a082-1937-4f6e-b9f3-b27db40ab02d" containerName="docker-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.167432 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.169859 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.169996 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.170955 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.180507 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.184813 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224258 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224622 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224646 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224682 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pz8h\" (UniqueName: \"kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224766 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224802 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224822 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224849 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224899 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224928 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224952 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.224979 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326206 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326280 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326315 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326355 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4pz8h\" (UniqueName: \"kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326389 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326440 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326835 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326892 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.326988 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:53 crc kubenswrapper[3561]: I1203
00:23:53.327026 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327074 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327105 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327121 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327161 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 
00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327184 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327255 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327273 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327320 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327459 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 
03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.327632 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.328144 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.332712 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.348271 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pz8h\" (UniqueName: \"kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.348333 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:53 crc kubenswrapper[3561]: I1203 00:23:53.494750 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:54 crc kubenswrapper[3561]: I1203 00:23:54.030134 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:23:54 crc kubenswrapper[3561]: I1203 00:23:54.780498 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"49909792-503f-4cd8-9578-0c60a792664c","Type":"ContainerStarted","Data":"2ba4e2411e21bd95dd9c3e66da700587741a307c42a286f4eff94962475d1978"} Dec 03 00:23:56 crc kubenswrapper[3561]: I1203 00:23:56.790442 3561 generic.go:334] "Generic (PLEG): container finished" podID="49909792-503f-4cd8-9578-0c60a792664c" containerID="ad5762220d214f2e7a43186797fc30211da071646a5b88a9cd17c3f4147f7fc2" exitCode=0 Dec 03 00:23:56 crc kubenswrapper[3561]: I1203 00:23:56.790532 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"49909792-503f-4cd8-9578-0c60a792664c","Type":"ContainerDied","Data":"ad5762220d214f2e7a43186797fc30211da071646a5b88a9cd17c3f4147f7fc2"} Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.623432 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.623772 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.623813 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.624623 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.624790 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea" gracePeriod=600 Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.803976 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea" exitCode=0 Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.804208 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea"} Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.804730 3561 scope.go:117] "RemoveContainer" containerID="c1be71d42620bb5792bd0a7738661749d3c399fe14e4bda9a97196271f69d892" Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.808107 3561 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"49909792-503f-4cd8-9578-0c60a792664c","Type":"ContainerStarted","Data":"9c2203b3958c7716e9c5f87751a3084eb3ae3603b0e5c57a628af9843d92a325"} Dec 03 00:23:57 crc kubenswrapper[3561]: I1203 00:23:57.843472 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.843427627 podStartE2EDuration="4.843427627s" podCreationTimestamp="2025-12-03 00:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:23:57.840898818 +0000 UTC m=+1036.621333096" watchObservedRunningTime="2025-12-03 00:23:57.843427627 +0000 UTC m=+1036.623861895" Dec 03 00:23:58 crc kubenswrapper[3561]: I1203 00:23:58.814036 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31"} Dec 03 00:24:03 crc kubenswrapper[3561]: I1203 00:24:03.978268 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:24:03 crc kubenswrapper[3561]: I1203 00:24:03.978879 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="docker-build" containerID="cri-o://9c2203b3958c7716e9c5f87751a3084eb3ae3603b0e5c57a628af9843d92a325" gracePeriod=30 Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.605404 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.605871 3561 topology_manager.go:215] "Topology Admit Handler" 
podUID="56b283a4-56a1-4526-b875-7a7e946d244a" podNamespace="service-telemetry" podName="smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.607048 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.609233 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.609293 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.610860 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.625993 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788685 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788746 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788821 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788890 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788937 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.788965 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq2vm\" (UniqueName: \"kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789000 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-2-build\" (UID: 
\"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789035 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789109 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789162 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789215 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.789237 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890171 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890237 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890260 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890282 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 
00:24:05.890322 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890342 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890364 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890384 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890410 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc 
kubenswrapper[3561]: I1203 00:24:05.890430 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zq2vm\" (UniqueName: \"kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890454 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890476 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890468 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.890648 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891062 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891160 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891287 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891368 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891498 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: 
\"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.891653 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.892107 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.896099 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.906356 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.912296 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq2vm\" (UniqueName: 
\"kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm\") pod \"smart-gateway-operator-2-build\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:05 crc kubenswrapper[3561]: I1203 00:24:05.921290 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.513350 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.864235 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_49909792-503f-4cd8-9578-0c60a792664c/docker-build/0.log" Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.868784 3561 generic.go:334] "Generic (PLEG): container finished" podID="49909792-503f-4cd8-9578-0c60a792664c" containerID="9c2203b3958c7716e9c5f87751a3084eb3ae3603b0e5c57a628af9843d92a325" exitCode=1 Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.868885 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"49909792-503f-4cd8-9578-0c60a792664c","Type":"ContainerDied","Data":"9c2203b3958c7716e9c5f87751a3084eb3ae3603b0e5c57a628af9843d92a325"} Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.869839 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerStarted","Data":"275b728db2e8e92edab97b815bb7e5b35e3fbe3de9bfab396734670ff44d4b60"} Dec 03 00:24:06 crc kubenswrapper[3561]: I1203 00:24:06.966969 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_49909792-503f-4cd8-9578-0c60a792664c/docker-build/0.log" Dec 03 00:24:06 crc 
kubenswrapper[3561]: I1203 00:24:06.967445 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.021934 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022031 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022062 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022213 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022268 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022324 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022302 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022346 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022410 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022474 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pz8h\" (UniqueName: \"kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022500 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022493 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022634 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022700 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push\") pod \"49909792-503f-4cd8-9578-0c60a792664c\" (UID: \"49909792-503f-4cd8-9578-0c60a792664c\") "
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022979 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.023228 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.022998 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49909792-503f-4cd8-9578-0c60a792664c-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.023495 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.023702 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.023769 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.024740 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.027250 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h" (OuterVolumeSpecName: "kube-api-access-4pz8h") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "kube-api-access-4pz8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.027273 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.027479 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124491 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124530 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124556 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124568 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4pz8h\" (UniqueName: \"kubernetes.io/projected/49909792-503f-4cd8-9578-0c60a792664c-kube-api-access-4pz8h\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124581 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124591 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49909792-503f-4cd8-9578-0c60a792664c-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124601 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.124610 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49909792-503f-4cd8-9578-0c60a792664c-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.221840 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.225488 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.260534 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "49909792-503f-4cd8-9578-0c60a792664c" (UID: "49909792-503f-4cd8-9578-0c60a792664c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.327121 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49909792-503f-4cd8-9578-0c60a792664c-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.877117 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerStarted","Data":"f8ac7c56f65e2200f37e79b68a0e6c770ef6b13b0dad283ac1589e1f922c4f7c"}
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.878995 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_49909792-503f-4cd8-9578-0c60a792664c/docker-build/0.log"
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.879391 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"49909792-503f-4cd8-9578-0c60a792664c","Type":"ContainerDied","Data":"2ba4e2411e21bd95dd9c3e66da700587741a307c42a286f4eff94962475d1978"}
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.879425 3561 scope.go:117] "RemoveContainer" containerID="9c2203b3958c7716e9c5f87751a3084eb3ae3603b0e5c57a628af9843d92a325"
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.879432 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.957930 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.964664 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:24:07 crc kubenswrapper[3561]: I1203 00:24:07.970889 3561 scope.go:117] "RemoveContainer" containerID="ad5762220d214f2e7a43186797fc30211da071646a5b88a9cd17c3f4147f7fc2"
Dec 03 00:24:08 crc kubenswrapper[3561]: I1203 00:24:08.885908 3561 generic.go:334] "Generic (PLEG): container finished" podID="56b283a4-56a1-4526-b875-7a7e946d244a" containerID="f8ac7c56f65e2200f37e79b68a0e6c770ef6b13b0dad283ac1589e1f922c4f7c" exitCode=0
Dec 03 00:24:08 crc kubenswrapper[3561]: I1203 00:24:08.885977 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerDied","Data":"f8ac7c56f65e2200f37e79b68a0e6c770ef6b13b0dad283ac1589e1f922c4f7c"}
Dec 03 00:24:09 crc kubenswrapper[3561]: I1203 00:24:09.677976 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49909792-503f-4cd8-9578-0c60a792664c" path="/var/lib/kubelet/pods/49909792-503f-4cd8-9578-0c60a792664c/volumes"
Dec 03 00:24:09 crc kubenswrapper[3561]: I1203 00:24:09.895663 3561 generic.go:334] "Generic (PLEG): container finished" podID="56b283a4-56a1-4526-b875-7a7e946d244a" containerID="00f3c1116ba8cd3850d5e9df6af85d7575ed9bd721355ec6f1eb9616d8f434aa" exitCode=0
Dec 03 00:24:09 crc kubenswrapper[3561]: I1203 00:24:09.895719 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerDied","Data":"00f3c1116ba8cd3850d5e9df6af85d7575ed9bd721355ec6f1eb9616d8f434aa"}
Dec 03 00:24:09 crc kubenswrapper[3561]: I1203 00:24:09.938665 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_56b283a4-56a1-4526-b875-7a7e946d244a/manage-dockerfile/0.log"
Dec 03 00:24:10 crc kubenswrapper[3561]: I1203 00:24:10.903465 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerStarted","Data":"05932df61ce90942ca06a511f43a5666be2a618fa63de8a19f32f639731cd1a6"}
Dec 03 00:24:10 crc kubenswrapper[3561]: I1203 00:24:10.942510 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.942466946 podStartE2EDuration="5.942466946s" podCreationTimestamp="2025-12-03 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:24:10.936760488 +0000 UTC m=+1049.717194746" watchObservedRunningTime="2025-12-03 00:24:10.942466946 +0000 UTC m=+1049.722901204"
Dec 03 00:24:41 crc kubenswrapper[3561]: I1203 00:24:41.572380 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:24:41 crc kubenswrapper[3561]: I1203 00:24:41.572947 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:24:41 crc kubenswrapper[3561]: I1203 00:24:41.572989 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:24:41 crc kubenswrapper[3561]: I1203 00:24:41.573006 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:24:41 crc kubenswrapper[3561]: I1203 00:24:41.573027 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:25:35 crc kubenswrapper[3561]: I1203 00:25:35.403786 3561 generic.go:334] "Generic (PLEG): container finished" podID="56b283a4-56a1-4526-b875-7a7e946d244a" containerID="05932df61ce90942ca06a511f43a5666be2a618fa63de8a19f32f639731cd1a6" exitCode=0
Dec 03 00:25:35 crc kubenswrapper[3561]: I1203 00:25:35.403875 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerDied","Data":"05932df61ce90942ca06a511f43a5666be2a618fa63de8a19f32f639731cd1a6"}
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.676819 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770481 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770611 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770645 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770685 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770764 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770798 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770844 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770877 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770912 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770953 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq2vm\" (UniqueName: \"kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.770973 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771007 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir\") pod \"56b283a4-56a1-4526-b875-7a7e946d244a\" (UID: \"56b283a4-56a1-4526-b875-7a7e946d244a\") "
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771211 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771467 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771488 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771886 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.771933 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.772167 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.773409 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.776572 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.777653 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm" (OuterVolumeSpecName: "kube-api-access-zq2vm") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "kube-api-access-zq2vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.779392 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872413 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872451 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872463 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872475 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872485 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56b283a4-56a1-4526-b875-7a7e946d244a-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872496 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872505 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zq2vm\" (UniqueName: \"kubernetes.io/projected/56b283a4-56a1-4526-b875-7a7e946d244a-kube-api-access-zq2vm\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872514 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56b283a4-56a1-4526-b875-7a7e946d244a-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.872528 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/56b283a4-56a1-4526-b875-7a7e946d244a-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.873219 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:36 crc kubenswrapper[3561]: I1203 00:25:36.974718 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:37 crc kubenswrapper[3561]: I1203 00:25:37.075783 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:37 crc kubenswrapper[3561]: I1203 00:25:37.419669 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"56b283a4-56a1-4526-b875-7a7e946d244a","Type":"ContainerDied","Data":"275b728db2e8e92edab97b815bb7e5b35e3fbe3de9bfab396734670ff44d4b60"}
Dec 03 00:25:37 crc kubenswrapper[3561]: I1203 00:25:37.419973 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="275b728db2e8e92edab97b815bb7e5b35e3fbe3de9bfab396734670ff44d4b60"
Dec 03 00:25:37 crc kubenswrapper[3561]: I1203 00:25:37.419809 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:25:39 crc kubenswrapper[3561]: I1203 00:25:39.065458 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "56b283a4-56a1-4526-b875-7a7e946d244a" (UID: "56b283a4-56a1-4526-b875-7a7e946d244a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:39 crc kubenswrapper[3561]: I1203 00:25:39.101100 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56b283a4-56a1-4526-b875-7a7e946d244a-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360067 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360406 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b530ceda-e874-4d23-b4f4-486114066a57" podNamespace="service-telemetry" podName="sg-core-1-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: E1203 00:25:41.360581 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="manage-dockerfile"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360610 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="manage-dockerfile"
Dec 03 00:25:41 crc kubenswrapper[3561]: E1203 00:25:41.360623 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="git-clone"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360629 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="git-clone"
Dec 03 00:25:41 crc kubenswrapper[3561]: E1203 00:25:41.360642 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="manage-dockerfile"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360648 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="manage-dockerfile"
Dec 03 00:25:41 crc kubenswrapper[3561]: E1203 00:25:41.360661 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360667 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: E1203 00:25:41.360674 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360680 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360818 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="49909792-503f-4cd8-9578-0c60a792664c" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.360829 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b283a4-56a1-4526-b875-7a7e946d244a" containerName="docker-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.361406 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.363118 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.364101 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.366721 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.367118 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.379366 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431481 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431556 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnlx\" (UniqueName: \"kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build"
Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431585 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir\") pod
\"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431755 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431911 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.431986 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432082 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432109 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: 
\"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432143 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432229 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432379 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.432491 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.533704 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.533758 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.533779 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.533798 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.533857 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.534952 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs\") pod 
\"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535040 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535114 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535298 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535372 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535416 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 
00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535446 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnlx\" (UniqueName: \"kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535471 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535484 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535506 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535569 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535628 3561 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535646 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535675 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.535932 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.536154 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.540067 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: 
\"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.541779 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.558942 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnlx\" (UniqueName: \"kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx\") pod \"sg-core-1-build\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.573576 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.573646 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.573712 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.573759 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.573785 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.678211 3561 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:41 crc kubenswrapper[3561]: I1203 00:25:41.883826 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:42 crc kubenswrapper[3561]: I1203 00:25:42.445613 3561 generic.go:334] "Generic (PLEG): container finished" podID="b530ceda-e874-4d23-b4f4-486114066a57" containerID="4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd" exitCode=0 Dec 03 00:25:42 crc kubenswrapper[3561]: I1203 00:25:42.445894 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"b530ceda-e874-4d23-b4f4-486114066a57","Type":"ContainerDied","Data":"4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd"} Dec 03 00:25:42 crc kubenswrapper[3561]: I1203 00:25:42.445954 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"b530ceda-e874-4d23-b4f4-486114066a57","Type":"ContainerStarted","Data":"8e39967ef63eb1a7be36dd8c4a518761808bb2ffd82864a513391515dd8dfc88"} Dec 03 00:25:43 crc kubenswrapper[3561]: I1203 00:25:43.455801 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"b530ceda-e874-4d23-b4f4-486114066a57","Type":"ContainerStarted","Data":"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"} Dec 03 00:25:43 crc kubenswrapper[3561]: I1203 00:25:43.487977 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.487904933 podStartE2EDuration="2.487904933s" podCreationTimestamp="2025-12-03 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:25:43.482529425 +0000 UTC m=+1142.262963683" watchObservedRunningTime="2025-12-03 00:25:43.487904933 +0000 UTC m=+1142.268339201" Dec 03 00:25:51 
crc kubenswrapper[3561]: I1203 00:25:51.676795 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:51 crc kubenswrapper[3561]: I1203 00:25:51.677631 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="docker-build" containerID="cri-o://863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298" gracePeriod=30 Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.086114 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_b530ceda-e874-4d23-b4f4-486114066a57/docker-build/0.log" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.086878 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.126920 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127042 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127124 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127191 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127260 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127343 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127426 3561 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127512 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127623 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127709 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127786 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.127889 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod 
"b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128297 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128374 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128457 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128488 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128855 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128935 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.128971 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxnlx\" (UniqueName: \"kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx\") pod \"b530ceda-e874-4d23-b4f4-486114066a57\" (UID: \"b530ceda-e874-4d23-b4f4-486114066a57\") " Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129239 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129255 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129268 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129281 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129295 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b530ceda-e874-4d23-b4f4-486114066a57-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129308 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.129322 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b530ceda-e874-4d23-b4f4-486114066a57-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.132818 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx" (OuterVolumeSpecName: "kube-api-access-hxnlx") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "kube-api-access-hxnlx". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.133048 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.133663 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.230864 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hxnlx\" (UniqueName: \"kubernetes.io/projected/b530ceda-e874-4d23-b4f4-486114066a57-kube-api-access-hxnlx\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.230929 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.230954 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/b530ceda-e874-4d23-b4f4-486114066a57-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.235649 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.248072 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b530ceda-e874-4d23-b4f4-486114066a57" (UID: "b530ceda-e874-4d23-b4f4-486114066a57"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.332254 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.332524 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b530ceda-e874-4d23-b4f4-486114066a57-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.511531 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_b530ceda-e874-4d23-b4f4-486114066a57/docker-build/0.log"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.512121 3561 generic.go:334] "Generic (PLEG): container finished" podID="b530ceda-e874-4d23-b4f4-486114066a57" containerID="863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298" exitCode=1
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.512170 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"b530ceda-e874-4d23-b4f4-486114066a57","Type":"ContainerDied","Data":"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"}
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.512174 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.512205 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"b530ceda-e874-4d23-b4f4-486114066a57","Type":"ContainerDied","Data":"8e39967ef63eb1a7be36dd8c4a518761808bb2ffd82864a513391515dd8dfc88"}
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.512258 3561 scope.go:117] "RemoveContainer" containerID="863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.560464 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.568414 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.577236 3561 scope.go:117] "RemoveContainer" containerID="4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.615719 3561 scope.go:117] "RemoveContainer" containerID="863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"
Dec 03 00:25:53 crc kubenswrapper[3561]: E1203 00:25:53.616207 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298\": container with ID starting with 863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298 not found: ID does not exist" containerID="863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.616265 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298"} err="failed to get container status \"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298\": rpc error: code = NotFound desc = could not find container \"863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298\": container with ID starting with 863b4c7fb466e682bedd993242892fff7c693fd6a84895b9f028d2e08324f298 not found: ID does not exist"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.616284 3561 scope.go:117] "RemoveContainer" containerID="4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd"
Dec 03 00:25:53 crc kubenswrapper[3561]: E1203 00:25:53.616627 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd\": container with ID starting with 4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd not found: ID does not exist" containerID="4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.616668 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd"} err="failed to get container status \"4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd\": rpc error: code = NotFound desc = could not find container \"4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd\": container with ID starting with 4c0a5d920e5ca3e81e371603a5a4cff33da369876908e88ffed54b38969c6bbd not found: ID does not exist"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.670364 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b530ceda-e874-4d23-b4f4-486114066a57" path="/var/lib/kubelet/pods/b530ceda-e874-4d23-b4f4-486114066a57/volumes"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.758827 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"]
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.758949 3561 topology_manager.go:215] "Topology Admit Handler" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" podNamespace="service-telemetry" podName="sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: E1203 00:25:53.759089 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="docker-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.759101 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="docker-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: E1203 00:25:53.759123 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="manage-dockerfile"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.759130 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="manage-dockerfile"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.759237 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="b530ceda-e874-4d23-b4f4-486114066a57" containerName="docker-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.760079 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.764382 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.764609 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.765160 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.765298 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.775973 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"]
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.839122 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.839418 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.839515 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8gg\" (UniqueName: \"kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.839685 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.839938 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840108 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840203 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840270 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840317 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840428 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840581 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.840685 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.942219 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.942311 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.942745 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.942902 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943034 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943274 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943372 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943514 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943626 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943607 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943802 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943908 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.943999 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944135 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944236 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8gg\" (UniqueName: \"kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944323 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944251 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944265 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.944317 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.946378 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.946820 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.949265 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.952137 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:53 crc kubenswrapper[3561]: I1203 00:25:53.971983 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8gg\" (UniqueName: \"kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg\") pod \"sg-core-2-build\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") " pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:54 crc kubenswrapper[3561]: I1203 00:25:54.073758 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Dec 03 00:25:54 crc kubenswrapper[3561]: I1203 00:25:54.275976 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"]
Dec 03 00:25:54 crc kubenswrapper[3561]: I1203 00:25:54.521467 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerStarted","Data":"6f23467cfbdf0f5292d753c88e902dd78fcfc3213457629e35535634a5feee50"}
Dec 03 00:25:55 crc kubenswrapper[3561]: I1203 00:25:55.531621 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerStarted","Data":"f189ba58b54d1cd6b74f22bb4ce9c88f31da3b469bb6ef5de552e8f5a17009b7"}
Dec 03 00:25:56 crc kubenswrapper[3561]: I1203 00:25:56.537298 3561 generic.go:334] "Generic (PLEG): container finished" podID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerID="f189ba58b54d1cd6b74f22bb4ce9c88f31da3b469bb6ef5de552e8f5a17009b7" exitCode=0
Dec 03 00:25:56 crc kubenswrapper[3561]: I1203 00:25:56.537388 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerDied","Data":"f189ba58b54d1cd6b74f22bb4ce9c88f31da3b469bb6ef5de552e8f5a17009b7"}
Dec 03 00:25:57 crc kubenswrapper[3561]: I1203 00:25:57.546857 3561 generic.go:334] "Generic (PLEG): container finished" podID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerID="ce9aa0a8db64252820fd304af390bcdb6734e1834293e5bd7263c78ee71c9a19" exitCode=0
Dec 03 00:25:57 crc kubenswrapper[3561]: I1203 00:25:57.547045 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerDied","Data":"ce9aa0a8db64252820fd304af390bcdb6734e1834293e5bd7263c78ee71c9a19"}
Dec 03 00:25:57 crc kubenswrapper[3561]: I1203 00:25:57.579689 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_afd3188f-2fcd-4ca9-8557-cd2b2134b3e9/manage-dockerfile/0.log"
Dec 03 00:25:58 crc kubenswrapper[3561]: I1203 00:25:58.596783 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerStarted","Data":"1fda6703684b5b8cb93fcd63fc37320ade2cd9eae22276dd42924098b802e63a"}
Dec 03 00:25:58 crc kubenswrapper[3561]: I1203 00:25:58.629517 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.62946499 podStartE2EDuration="5.62946499s" podCreationTimestamp="2025-12-03 00:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:25:58.619718916 +0000 UTC m=+1157.400153174" watchObservedRunningTime="2025-12-03 00:25:58.62946499 +0000 UTC m=+1157.409899258"
Dec 03 00:26:27 crc kubenswrapper[3561]: I1203 00:26:27.623244 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:26:27 crc kubenswrapper[3561]: I1203 00:26:27.623733 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:26:41 crc kubenswrapper[3561]: I1203 00:26:41.575035 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:26:41 crc kubenswrapper[3561]: I1203 00:26:41.575691 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:26:41 crc kubenswrapper[3561]: I1203 00:26:41.575770 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:26:41 crc kubenswrapper[3561]: I1203 00:26:41.575811 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:26:41 crc kubenswrapper[3561]: I1203 00:26:41.575840 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:26:57 crc kubenswrapper[3561]: I1203 00:26:57.624822 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:26:57 crc kubenswrapper[3561]: I1203 00:26:57.625426 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:27:27 crc kubenswrapper[3561]: I1203 00:27:27.623606 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:27:27 crc kubenswrapper[3561]: I1203 00:27:27.624124 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:27:27 crc kubenswrapper[3561]: I1203 00:27:27.624165 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:27:27 crc kubenswrapper[3561]: I1203 00:27:27.625099 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 00:27:27 crc kubenswrapper[3561]: I1203 00:27:27.625256 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31" gracePeriod=600
Dec 03 00:27:29 crc kubenswrapper[3561]: I1203 00:27:29.380290 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31" exitCode=0
Dec 03 00:27:29 crc kubenswrapper[3561]: I1203 00:27:29.380351 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31"}
Dec 03 00:27:29 crc kubenswrapper[3561]: I1203 00:27:29.381194 3561 scope.go:117] "RemoveContainer" containerID="ad8f66b514709c53336fec531be5d7c0dc6b2b71864cfc0012b90c3d7284ceea"
Dec 03 00:27:30 crc kubenswrapper[3561]: I1203 00:27:30.388800 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601"}
Dec 03 00:27:41 crc kubenswrapper[3561]: I1203 00:27:41.576074 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:27:41 crc kubenswrapper[3561]: I1203 00:27:41.576589 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:27:41 crc kubenswrapper[3561]: I1203 00:27:41.576622 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:27:41 crc kubenswrapper[3561]: I1203 00:27:41.576656 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:27:41 crc kubenswrapper[3561]: I1203 00:27:41.576678 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:28:41 crc kubenswrapper[3561]: I1203 00:28:41.577227 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:28:41 crc kubenswrapper[3561]: I1203 00:28:41.577828 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:28:41 crc kubenswrapper[3561]: I1203 00:28:41.577874 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:28:41 crc kubenswrapper[3561]: I1203 00:28:41.577899 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:28:41 crc kubenswrapper[3561]: I1203 00:28:41.577939 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:29:22 crc kubenswrapper[3561]: I1203 00:29:22.121212 3561 generic.go:334] "Generic (PLEG): container finished" podID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerID="1fda6703684b5b8cb93fcd63fc37320ade2cd9eae22276dd42924098b802e63a" exitCode=0
Dec 03 00:29:22 crc kubenswrapper[3561]: I1203 00:29:22.121356 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerDied","Data":"1fda6703684b5b8cb93fcd63fc37320ade2cd9eae22276dd42924098b802e63a"}
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.399520 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516185 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516252 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516305 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516335 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516385 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516419 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516463 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8gg\" (UniqueName: \"kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516489 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516512 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516534 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516618 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516652 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516672 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs\") pod \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\" (UID: \"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9\") "
Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.516868 3561
reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.517185 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.517446 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.517653 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.517777 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.518162 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.522685 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg" (OuterVolumeSpecName: "kube-api-access-6h8gg") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "kube-api-access-6h8gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.523209 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.526637 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.529743 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651811 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651848 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651863 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651876 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6h8gg\" (UniqueName: \"kubernetes.io/projected/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-kube-api-access-6h8gg\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651892 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651905 3561 reconciler_common.go:300] "Volume detached for 
volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651918 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651931 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.651943 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.809658 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:23 crc kubenswrapper[3561]: I1203 00:29:23.855135 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:24 crc kubenswrapper[3561]: I1203 00:29:24.140158 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"afd3188f-2fcd-4ca9-8557-cd2b2134b3e9","Type":"ContainerDied","Data":"6f23467cfbdf0f5292d753c88e902dd78fcfc3213457629e35535634a5feee50"} Dec 03 00:29:24 crc kubenswrapper[3561]: I1203 00:29:24.140212 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f23467cfbdf0f5292d753c88e902dd78fcfc3213457629e35535634a5feee50" Dec 03 00:29:24 crc kubenswrapper[3561]: I1203 00:29:24.140287 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:29:26 crc kubenswrapper[3561]: I1203 00:29:26.625991 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" (UID: "afd3188f-2fcd-4ca9-8557-cd2b2134b3e9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[3561]: I1203 00:29:26.693725 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/afd3188f-2fcd-4ca9-8557-cd2b2134b3e9-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.692962 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.693086 3561 topology_manager.go:215] "Topology Admit Handler" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" podNamespace="service-telemetry" podName="sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: E1203 00:29:28.693230 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="manage-dockerfile" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.693242 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="manage-dockerfile" Dec 03 00:29:28 crc kubenswrapper[3561]: E1203 00:29:28.693258 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="git-clone" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.693264 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="git-clone" Dec 03 00:29:28 crc kubenswrapper[3561]: E1203 00:29:28.693274 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="docker-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.693280 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="docker-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.693412 3561 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="afd3188f-2fcd-4ca9-8557-cd2b2134b3e9" containerName="docker-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.694017 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.697229 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.697598 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.697783 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.698501 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.711898 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.717580 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.717636 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 
00:29:28.717661 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.717707 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718067 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btsr\" (UniqueName: \"kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718130 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718356 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 
03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718422 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718531 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.718688 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.719613 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.720206 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " 
pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.821185 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.821924 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822195 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822350 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822483 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8btsr\" (UniqueName: \"kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 
00:29:28.822667 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822821 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822953 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823045 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823203 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823356 3561 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823642 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823777 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.823906 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.821359 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.822407 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.824158 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.824221 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.824624 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.824864 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.829382 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: 
\"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.831556 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.834704 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:28 crc kubenswrapper[3561]: I1203 00:29:28.853994 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btsr\" (UniqueName: \"kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr\") pod \"sg-bridge-1-build\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:29 crc kubenswrapper[3561]: I1203 00:29:29.018101 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Dec 03 00:29:29 crc kubenswrapper[3561]: I1203 00:29:29.595167 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Dec 03 00:29:30 crc kubenswrapper[3561]: I1203 00:29:30.187848 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerStarted","Data":"2140b67f0406345715c0c718b911c72a3558ff9f671cfea1a9276df2d895d87b"}
Dec 03 00:29:30 crc kubenswrapper[3561]: I1203 00:29:30.188209 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerStarted","Data":"ffc72ffa91a5c8e2a8ec645a9f5e317179492faa544b954d67b68fd9f4d58063"}
Dec 03 00:29:31 crc kubenswrapper[3561]: I1203 00:29:31.198403 3561 generic.go:334] "Generic (PLEG): container finished" podID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerID="2140b67f0406345715c0c718b911c72a3558ff9f671cfea1a9276df2d895d87b" exitCode=0
Dec 03 00:29:31 crc kubenswrapper[3561]: I1203 00:29:31.198517 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerDied","Data":"2140b67f0406345715c0c718b911c72a3558ff9f671cfea1a9276df2d895d87b"}
Dec 03 00:29:32 crc kubenswrapper[3561]: I1203 00:29:32.209388 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerStarted","Data":"c6b9c9969aea2e664b07aeab0d9d329bfd0894252d16df5843ca6e0ce25a8093"}
Dec 03 00:29:32 crc kubenswrapper[3561]: I1203 00:29:32.253072 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=4.252976297 podStartE2EDuration="4.252976297s" podCreationTimestamp="2025-12-03 00:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:29:32.244832643 +0000 UTC m=+1371.025266961" watchObservedRunningTime="2025-12-03 00:29:32.252976297 +0000 UTC m=+1371.033410615"
Dec 03 00:29:39 crc kubenswrapper[3561]: I1203 00:29:39.072142 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Dec 03 00:29:39 crc kubenswrapper[3561]: I1203 00:29:39.072669 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="docker-build" containerID="cri-o://c6b9c9969aea2e664b07aeab0d9d329bfd0894252d16df5843ca6e0ce25a8093" gracePeriod=30
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.259713 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_618b217e-e2a1-4718-9aef-ddbcadf90a7d/docker-build/0.log"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.260663 3561 generic.go:334] "Generic (PLEG): container finished" podID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerID="c6b9c9969aea2e664b07aeab0d9d329bfd0894252d16df5843ca6e0ce25a8093" exitCode=1
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.260706 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerDied","Data":"c6b9c9969aea2e664b07aeab0d9d329bfd0894252d16df5843ca6e0ce25a8093"}
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.618266 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_618b217e-e2a1-4718-9aef-ddbcadf90a7d/docker-build/0.log"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.618825 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.732228 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.732606 3561 topology_manager.go:215] "Topology Admit Handler" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" podNamespace="service-telemetry" podName="sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: E1203 00:29:40.732749 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="docker-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.732760 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="docker-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: E1203 00:29:40.732775 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="manage-dockerfile"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.732781 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="manage-dockerfile"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.732934 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" containerName="docker-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.733742 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.739316 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.740133 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.741182 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744388 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744474 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744521 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744583 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744612 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744644 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744684 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744723 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btsr\" (UniqueName: \"kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744752 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744779 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744813 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.744845 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir\") pod \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\" (UID: \"618b217e-e2a1-4718-9aef-ddbcadf90a7d\") "
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.746359 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.747127 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.747222 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.749179 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.749249 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.749769 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.750857 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.755601 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr" (OuterVolumeSpecName: "kube-api-access-8btsr") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "kube-api-access-8btsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.759027 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.768001 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.769591 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.826984 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.845819 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.845888 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846039 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846093 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846157 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846190 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846217 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846259 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846296 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspxg\" (UniqueName: \"kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846337 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846364 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846385 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846464 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8btsr\" (UniqueName: \"kubernetes.io/projected/618b217e-e2a1-4718-9aef-ddbcadf90a7d-kube-api-access-8btsr\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846476 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846487 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846497 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846508 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846517 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846527 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/618b217e-e2a1-4718-9aef-ddbcadf90a7d-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846553 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846567 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846577 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.846587 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/618b217e-e2a1-4718-9aef-ddbcadf90a7d-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.947744 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.947861 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.947929 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.947977 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948031 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948104 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948169 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tspxg\" (UniqueName: \"kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948230 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948309 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948353 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948426 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948477 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.948806 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.949195 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.949217 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.949829 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.949885 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.950735 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.953651 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.953749 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.954289 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.956412 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.957656 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:40 crc kubenswrapper[3561]: I1203 00:29:40.972885 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspxg\" (UniqueName: \"kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg\") pod \"sg-bridge-2-build\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.052190 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.268174 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "618b217e-e2a1-4718-9aef-ddbcadf90a7d" (UID: "618b217e-e2a1-4718-9aef-ddbcadf90a7d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.268813 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_618b217e-e2a1-4718-9aef-ddbcadf90a7d/docker-build/0.log"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.269309 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"618b217e-e2a1-4718-9aef-ddbcadf90a7d","Type":"ContainerDied","Data":"ffc72ffa91a5c8e2a8ec645a9f5e317179492faa544b954d67b68fd9f4d58063"}
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.269399 3561 scope.go:117] "RemoveContainer" containerID="c6b9c9969aea2e664b07aeab0d9d329bfd0894252d16df5843ca6e0ce25a8093"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.269627 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.320737 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.323129 3561 scope.go:117] "RemoveContainer" containerID="2140b67f0406345715c0c718b911c72a3558ff9f671cfea1a9276df2d895d87b"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.326434 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.330402 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Dec 03 00:29:41 crc kubenswrapper[3561]: W1203 00:29:41.332163 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446ecc86_e366_4b5b_93e6_e2e2960b0bcc.slice/crio-f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1 WatchSource:0}: Error finding container f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1: Status 404 returned error can't find the container with id f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.354790 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/618b217e-e2a1-4718-9aef-ddbcadf90a7d-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.578600 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.578671 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.578720 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.578738 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.578792 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:29:41 crc kubenswrapper[3561]: I1203 00:29:41.676880 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="618b217e-e2a1-4718-9aef-ddbcadf90a7d" path="/var/lib/kubelet/pods/618b217e-e2a1-4718-9aef-ddbcadf90a7d/volumes"
Dec 03 00:29:42 crc kubenswrapper[3561]: I1203 00:29:42.280194 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerStarted","Data":"9c6a55a15e23d2b52c5df8156dadf8835894efac9acb927ebe45dcb59cf6195c"}
Dec 03 00:29:42 crc kubenswrapper[3561]: I1203 00:29:42.280614 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerStarted","Data":"f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1"}
Dec 03 00:29:43 crc kubenswrapper[3561]: I1203 00:29:43.289697 3561 generic.go:334] "Generic (PLEG): container finished" podID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerID="9c6a55a15e23d2b52c5df8156dadf8835894efac9acb927ebe45dcb59cf6195c" exitCode=0
Dec 03 00:29:43 crc kubenswrapper[3561]: I1203 00:29:43.289734 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerDied","Data":"9c6a55a15e23d2b52c5df8156dadf8835894efac9acb927ebe45dcb59cf6195c"}
Dec 03 00:29:44 crc kubenswrapper[3561]: I1203 00:29:44.298201 3561 generic.go:334]
"Generic (PLEG): container finished" podID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerID="aa530475163743b3a7079d53a9f770bd6e7c8b76c6f26cd56277f69f80083304" exitCode=0 Dec 03 00:29:44 crc kubenswrapper[3561]: I1203 00:29:44.298266 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerDied","Data":"aa530475163743b3a7079d53a9f770bd6e7c8b76c6f26cd56277f69f80083304"} Dec 03 00:29:44 crc kubenswrapper[3561]: I1203 00:29:44.373476 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_446ecc86-e366-4b5b-93e6-e2e2960b0bcc/manage-dockerfile/0.log" Dec 03 00:29:45 crc kubenswrapper[3561]: I1203 00:29:45.311668 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerStarted","Data":"a5cd636e6181cd1cdbed4c840798a0099330076060b1cba909d67ccea5697e3b"} Dec 03 00:29:45 crc kubenswrapper[3561]: I1203 00:29:45.375507 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.375430455 podStartE2EDuration="5.375430455s" podCreationTimestamp="2025-12-03 00:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:29:45.366171325 +0000 UTC m=+1384.146605623" watchObservedRunningTime="2025-12-03 00:29:45.375430455 +0000 UTC m=+1384.155864743" Dec 03 00:29:57 crc kubenswrapper[3561]: I1203 00:29:57.623313 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:29:57 crc kubenswrapper[3561]: I1203 
00:29:57.624829 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.179456 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5"] Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.179866 3561 topology_manager.go:215] "Topology Admit Handler" podUID="927f93bd-969a-488d-83fb-4a50cd9c5c11" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.180687 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.184651 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.184798 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.195628 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5"] Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.223843 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.223972 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.224041 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fnd7\" (UniqueName: \"kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.325407 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.325470 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.325520 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4fnd7\" (UniqueName: 
\"kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.326595 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.333487 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.350716 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fnd7\" (UniqueName: \"kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7\") pod \"collect-profiles-29412030-2qmr5\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.503516 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:00 crc kubenswrapper[3561]: I1203 00:30:00.744783 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5"] Dec 03 00:30:01 crc kubenswrapper[3561]: I1203 00:30:01.450584 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" event={"ID":"927f93bd-969a-488d-83fb-4a50cd9c5c11","Type":"ContainerStarted","Data":"7d92687c64a980847d15bb971582ea07ef9e309e14b6451e8397b4428203fec0"} Dec 03 00:30:02 crc kubenswrapper[3561]: I1203 00:30:02.458070 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" event={"ID":"927f93bd-969a-488d-83fb-4a50cd9c5c11","Type":"ContainerStarted","Data":"a7e24995b6c855128ad76d107b921fe17c84c91a657db71905fab934fa59a265"} Dec 03 00:30:02 crc kubenswrapper[3561]: I1203 00:30:02.481936 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" podStartSLOduration=2.481866416 podStartE2EDuration="2.481866416s" podCreationTimestamp="2025-12-03 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:30:02.476119396 +0000 UTC m=+1401.256553654" watchObservedRunningTime="2025-12-03 00:30:02.481866416 +0000 UTC m=+1401.262300734" Dec 03 00:30:03 crc kubenswrapper[3561]: I1203 00:30:03.463945 3561 generic.go:334] "Generic (PLEG): container finished" podID="927f93bd-969a-488d-83fb-4a50cd9c5c11" containerID="a7e24995b6c855128ad76d107b921fe17c84c91a657db71905fab934fa59a265" exitCode=0 Dec 03 00:30:03 crc kubenswrapper[3561]: I1203 00:30:03.463988 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" event={"ID":"927f93bd-969a-488d-83fb-4a50cd9c5c11","Type":"ContainerDied","Data":"a7e24995b6c855128ad76d107b921fe17c84c91a657db71905fab934fa59a265"} Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.737599 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.792811 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume\") pod \"927f93bd-969a-488d-83fb-4a50cd9c5c11\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.792998 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fnd7\" (UniqueName: \"kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7\") pod \"927f93bd-969a-488d-83fb-4a50cd9c5c11\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.793066 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume\") pod \"927f93bd-969a-488d-83fb-4a50cd9c5c11\" (UID: \"927f93bd-969a-488d-83fb-4a50cd9c5c11\") " Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.793304 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume" (OuterVolumeSpecName: "config-volume") pod "927f93bd-969a-488d-83fb-4a50cd9c5c11" (UID: "927f93bd-969a-488d-83fb-4a50cd9c5c11"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.798561 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7" (OuterVolumeSpecName: "kube-api-access-4fnd7") pod "927f93bd-969a-488d-83fb-4a50cd9c5c11" (UID: "927f93bd-969a-488d-83fb-4a50cd9c5c11"). InnerVolumeSpecName "kube-api-access-4fnd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.808767 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "927f93bd-969a-488d-83fb-4a50cd9c5c11" (UID: "927f93bd-969a-488d-83fb-4a50cd9c5c11"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.893983 3561 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927f93bd-969a-488d-83fb-4a50cd9c5c11-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.894278 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4fnd7\" (UniqueName: \"kubernetes.io/projected/927f93bd-969a-488d-83fb-4a50cd9c5c11-kube-api-access-4fnd7\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:04 crc kubenswrapper[3561]: I1203 00:30:04.894351 3561 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927f93bd-969a-488d-83fb-4a50cd9c5c11-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.475002 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" 
event={"ID":"927f93bd-969a-488d-83fb-4a50cd9c5c11","Type":"ContainerDied","Data":"7d92687c64a980847d15bb971582ea07ef9e309e14b6451e8397b4428203fec0"} Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.475052 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-2qmr5" Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.475060 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d92687c64a980847d15bb971582ea07ef9e309e14b6451e8397b4428203fec0" Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.549936 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.555075 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Dec 03 00:30:05 crc kubenswrapper[3561]: I1203 00:30:05.670882 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad171c4b-8408-4370-8e86-502999788ddb" path="/var/lib/kubelet/pods/ad171c4b-8408-4370-8e86-502999788ddb/volumes" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.103264 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.104080 3561 topology_manager.go:215] "Topology Admit Handler" podUID="121a2300-661b-4388-b909-010c943f3921" podNamespace="openshift-marketplace" podName="redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: E1203 00:30:22.104324 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="927f93bd-969a-488d-83fb-4a50cd9c5c11" containerName="collect-profiles" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.104341 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="927f93bd-969a-488d-83fb-4a50cd9c5c11" 
containerName="collect-profiles" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.104562 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="927f93bd-969a-488d-83fb-4a50cd9c5c11" containerName="collect-profiles" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.105906 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.119627 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.155298 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.155347 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.155473 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggv7f\" (UniqueName: \"kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.256617 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.256664 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.256732 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ggv7f\" (UniqueName: \"kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.257383 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.257632 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.277169 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggv7f\" (UniqueName: 
\"kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f\") pod \"redhat-operators-l7mjx\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.427984 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:22 crc kubenswrapper[3561]: I1203 00:30:22.630713 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:22 crc kubenswrapper[3561]: W1203 00:30:22.638673 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121a2300_661b_4388_b909_010c943f3921.slice/crio-8850767451c033525e12294c6cb2cc571e485c36a511a76de4371903d1547ab7 WatchSource:0}: Error finding container 8850767451c033525e12294c6cb2cc571e485c36a511a76de4371903d1547ab7: Status 404 returned error can't find the container with id 8850767451c033525e12294c6cb2cc571e485c36a511a76de4371903d1547ab7 Dec 03 00:30:23 crc kubenswrapper[3561]: I1203 00:30:23.578041 3561 generic.go:334] "Generic (PLEG): container finished" podID="121a2300-661b-4388-b909-010c943f3921" containerID="3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc" exitCode=0 Dec 03 00:30:23 crc kubenswrapper[3561]: I1203 00:30:23.578081 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerDied","Data":"3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc"} Dec 03 00:30:23 crc kubenswrapper[3561]: I1203 00:30:23.578102 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" 
event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerStarted","Data":"8850767451c033525e12294c6cb2cc571e485c36a511a76de4371903d1547ab7"} Dec 03 00:30:23 crc kubenswrapper[3561]: I1203 00:30:23.581426 3561 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:30:24 crc kubenswrapper[3561]: I1203 00:30:24.587001 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerStarted","Data":"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f"} Dec 03 00:30:27 crc kubenswrapper[3561]: I1203 00:30:27.637077 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:30:27 crc kubenswrapper[3561]: I1203 00:30:27.638057 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:37 crc kubenswrapper[3561]: I1203 00:30:37.656670 3561 generic.go:334] "Generic (PLEG): container finished" podID="121a2300-661b-4388-b909-010c943f3921" containerID="2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f" exitCode=0 Dec 03 00:30:37 crc kubenswrapper[3561]: I1203 00:30:37.656810 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerDied","Data":"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f"} Dec 03 00:30:41 crc kubenswrapper[3561]: 
I1203 00:30:41.579991 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.580597 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.580636 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.580682 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.580707 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.967739 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerStarted","Data":"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65"} Dec 03 00:30:41 crc kubenswrapper[3561]: I1203 00:30:41.985930 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7mjx" podStartSLOduration=5.629764016 podStartE2EDuration="19.985880443s" podCreationTimestamp="2025-12-03 00:30:22 +0000 UTC" firstStartedPulling="2025-12-03 00:30:23.579501772 +0000 UTC m=+1422.359936040" lastFinishedPulling="2025-12-03 00:30:37.935618209 +0000 UTC m=+1436.716052467" observedRunningTime="2025-12-03 00:30:41.984987965 +0000 UTC m=+1440.765422243" watchObservedRunningTime="2025-12-03 00:30:41.985880443 +0000 UTC m=+1440.766314711" Dec 03 00:30:42 crc kubenswrapper[3561]: I1203 00:30:42.428191 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:42 crc kubenswrapper[3561]: I1203 00:30:42.428447 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:43 crc kubenswrapper[3561]: I1203 00:30:43.532315 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7mjx" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="registry-server" probeResult="failure" output=< Dec 03 00:30:43 crc kubenswrapper[3561]: timeout: failed to connect service ":50051" within 1s Dec 03 00:30:43 crc kubenswrapper[3561]: > Dec 03 00:30:50 crc kubenswrapper[3561]: I1203 00:30:50.013683 3561 generic.go:334] "Generic (PLEG): container finished" podID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerID="a5cd636e6181cd1cdbed4c840798a0099330076060b1cba909d67ccea5697e3b" exitCode=0 Dec 03 00:30:50 crc kubenswrapper[3561]: I1203 00:30:50.013877 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerDied","Data":"a5cd636e6181cd1cdbed4c840798a0099330076060b1cba909d67ccea5697e3b"} Dec 03 00:30:50 crc kubenswrapper[3561]: E1203 00:30:50.992376 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" Dec 03 00:30:50 crc kubenswrapper[3561]: I1203 00:30:50.992675 3561 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" err="rpc error: code = NotFound desc = could not find container 
\"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.311295 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443524 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443626 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443649 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443670 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443691 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443707 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443742 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443761 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443785 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443805 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" 
(UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443834 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tspxg\" (UniqueName: \"kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.443863 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root\") pod \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\" (UID: \"446ecc86-e366-4b5b-93e6-e2e2960b0bcc\") " Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.444095 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.444328 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.444608 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.444820 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.444907 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.445098 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.445153 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.449609 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.450049 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.450095 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg" (OuterVolumeSpecName: "kube-api-access-tspxg") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "kube-api-access-tspxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.544952 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545303 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545352 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545367 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545382 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545395 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545410 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-run\") on 
node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545425 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545437 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tspxg\" (UniqueName: \"kubernetes.io/projected/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-kube-api-access-tspxg\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.545453 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.552804 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[3561]: I1203 00:30:51.646263 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.030579 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"446ecc86-e366-4b5b-93e6-e2e2960b0bcc","Type":"ContainerDied","Data":"f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1"} Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.030618 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f187c099143d20b0da4694a7890b48053d5452014a4522239a57380e14b4d2d1" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.030708 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.144609 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "446ecc86-e366-4b5b-93e6-e2e2960b0bcc" (UID: "446ecc86-e366-4b5b-93e6-e2e2960b0bcc"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.151395 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/446ecc86-e366-4b5b-93e6-e2e2960b0bcc-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.524106 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.619822 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:52 crc kubenswrapper[3561]: I1203 00:30:52.660195 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.042572 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7mjx" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="registry-server" containerID="cri-o://08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65" gracePeriod=2 Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.860297 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.986630 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content\") pod \"121a2300-661b-4388-b909-010c943f3921\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.986675 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggv7f\" (UniqueName: \"kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f\") pod \"121a2300-661b-4388-b909-010c943f3921\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.986735 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities\") pod \"121a2300-661b-4388-b909-010c943f3921\" (UID: \"121a2300-661b-4388-b909-010c943f3921\") " Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.987721 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities" (OuterVolumeSpecName: "utilities") pod "121a2300-661b-4388-b909-010c943f3921" (UID: "121a2300-661b-4388-b909-010c943f3921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:54 crc kubenswrapper[3561]: I1203 00:30:54.992669 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f" (OuterVolumeSpecName: "kube-api-access-ggv7f") pod "121a2300-661b-4388-b909-010c943f3921" (UID: "121a2300-661b-4388-b909-010c943f3921"). InnerVolumeSpecName "kube-api-access-ggv7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.051255 3561 generic.go:334] "Generic (PLEG): container finished" podID="121a2300-661b-4388-b909-010c943f3921" containerID="08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65" exitCode=0 Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.051299 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerDied","Data":"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65"} Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.051322 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mjx" event={"ID":"121a2300-661b-4388-b909-010c943f3921","Type":"ContainerDied","Data":"8850767451c033525e12294c6cb2cc571e485c36a511a76de4371903d1547ab7"} Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.051344 3561 scope.go:117] "RemoveContainer" containerID="08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.051463 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mjx" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.085171 3561 scope.go:117] "RemoveContainer" containerID="2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.089078 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ggv7f\" (UniqueName: \"kubernetes.io/projected/121a2300-661b-4388-b909-010c943f3921-kube-api-access-ggv7f\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.089124 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.139187 3561 scope.go:117] "RemoveContainer" containerID="3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.161998 3561 scope.go:117] "RemoveContainer" containerID="08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65" Dec 03 00:30:55 crc kubenswrapper[3561]: E1203 00:30:55.162528 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65\": container with ID starting with 08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65 not found: ID does not exist" containerID="08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.162635 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65"} err="failed to get container status \"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65\": rpc error: code = NotFound desc 
= could not find container \"08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65\": container with ID starting with 08bf911ea1b3073c1ab111bca4c0b8f287b20494452aeb89c060f05e469ccf65 not found: ID does not exist" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.162648 3561 scope.go:117] "RemoveContainer" containerID="2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f" Dec 03 00:30:55 crc kubenswrapper[3561]: E1203 00:30:55.162946 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f\": container with ID starting with 2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f not found: ID does not exist" containerID="2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.162986 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f"} err="failed to get container status \"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f\": rpc error: code = NotFound desc = could not find container \"2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f\": container with ID starting with 2aa96865122dcdff52c29bf15fbf7b40754b0b4dde6b3cd07b0481b2da656a2f not found: ID does not exist" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.162998 3561 scope.go:117] "RemoveContainer" containerID="3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc" Dec 03 00:30:55 crc kubenswrapper[3561]: E1203 00:30:55.163351 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc\": container with ID starting with 
3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc not found: ID does not exist" containerID="3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.163413 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc"} err="failed to get container status \"3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc\": rpc error: code = NotFound desc = could not find container \"3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc\": container with ID starting with 3fb68b05418d31552dc7062141080ea0fdebc974f49e61118de80b53f58af0bc not found: ID does not exist" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.812225 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "121a2300-661b-4388-b909-010c943f3921" (UID: "121a2300-661b-4388-b909-010c943f3921"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:55 crc kubenswrapper[3561]: I1203 00:30:55.909041 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a2300-661b-4388-b909-010c943f3921-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:56 crc kubenswrapper[3561]: I1203 00:30:56.018986 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:56 crc kubenswrapper[3561]: I1203 00:30:56.024443 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7mjx"] Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233242 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233379 3561 topology_manager.go:215] "Topology Admit Handler" podUID="95070a69-d45b-48fb-be52-380b70da00a3" podNamespace="service-telemetry" podName="prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233601 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="extract-utilities" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233617 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="extract-utilities" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233632 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="docker-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233640 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="docker-build" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233651 3561 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="manage-dockerfile" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233659 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="manage-dockerfile" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233670 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="registry-server" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233678 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="registry-server" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233691 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="extract-content" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233699 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="extract-content" Dec 03 00:30:57 crc kubenswrapper[3561]: E1203 00:30:57.233712 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="git-clone" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233719 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="git-clone" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233843 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="446ecc86-e366-4b5b-93e6-e2e2960b0bcc" containerName="docker-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.233860 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="121a2300-661b-4388-b909-010c943f3921" containerName="registry-server" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.234592 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.239592 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.239889 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.239963 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.240220 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.261522 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.425747 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.425869 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.425937 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.425992 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426056 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9k6\" (UniqueName: \"kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426115 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426216 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426250 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426279 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426301 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426320 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.426607 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527302 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527358 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527384 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527407 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527428 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9k6\" (UniqueName: \"kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527449 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.527471 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528118 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528156 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528322 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528366 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528418 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528445 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528451 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528498 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528535 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528595 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528532 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528716 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528883 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.528936 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.533037 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.533223 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.544330 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9k6\" (UniqueName: \"kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.563302 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.623739 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.623826 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.623873 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.624938 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.625422 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601" gracePeriod=600
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.673138 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121a2300-661b-4388-b909-010c943f3921" path="/var/lib/kubelet/pods/121a2300-661b-4388-b909-010c943f3921/volumes"
Dec 03 00:30:57 crc kubenswrapper[3561]: I1203 00:30:57.777845 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Dec 03 00:30:58 crc kubenswrapper[3561]: I1203 00:30:58.071289 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601" exitCode=0
Dec 03 00:30:58 crc kubenswrapper[3561]: I1203 00:30:58.071497 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601"}
Dec 03 00:30:58 crc kubenswrapper[3561]: I1203 00:30:58.071635 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"}
Dec 03 00:30:58 crc kubenswrapper[3561]: I1203 00:30:58.071655 3561 scope.go:117] "RemoveContainer" containerID="bd361a7b70278e359bf6ff8b6fc3cdd347d465579a05530c586c68c0a2c94f31"
Dec 03 00:30:58 crc kubenswrapper[3561]: I1203 00:30:58.075633 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"95070a69-d45b-48fb-be52-380b70da00a3","Type":"ContainerStarted","Data":"00825fa1a526f5de333f4c98a81721da72c01f0be6a146900f03376788a2c55f"}
Dec 03 00:30:59 crc kubenswrapper[3561]: I1203 00:30:59.085998 3561 generic.go:334] "Generic (PLEG): container finished" podID="95070a69-d45b-48fb-be52-380b70da00a3" containerID="9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896" exitCode=0
Dec 03 00:30:59 crc kubenswrapper[3561]: I1203 00:30:59.086053 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"95070a69-d45b-48fb-be52-380b70da00a3","Type":"ContainerDied","Data":"9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896"}
Dec 03 00:31:00 crc kubenswrapper[3561]: I1203 00:31:00.095222 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"95070a69-d45b-48fb-be52-380b70da00a3","Type":"ContainerStarted","Data":"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"}
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.326673 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=10.326618394 podStartE2EDuration="10.326618394s" podCreationTimestamp="2025-12-03 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:31:00.134346547 +0000 UTC m=+1458.914780845" watchObservedRunningTime="2025-12-03 00:31:07.326618394 +0000 UTC m=+1466.107052662"
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.331290 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.331515 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="docker-build" containerID="cri-o://3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072" gracePeriod=30
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.681647 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_95070a69-d45b-48fb-be52-380b70da00a3/docker-build/0.log"
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.682433 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.863682 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864042 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9k6\" (UniqueName: \"kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864067 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864105 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864177 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864222 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864294 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864327 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864315 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864363 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864402 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864435 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864475 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run\") pod \"95070a69-d45b-48fb-be52-380b70da00a3\" (UID: \"95070a69-d45b-48fb-be52-380b70da00a3\") "
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864526 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864726 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.864744 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95070a69-d45b-48fb-be52-380b70da00a3-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.865319 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.865345 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.865751 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.866060 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.866109 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.870076 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.870897 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6" (OuterVolumeSpecName: "kube-api-access-zj9k6") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "kube-api-access-zj9k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.871650 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.921304 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967102 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967139 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967155 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967169 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967181 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95070a69-d45b-48fb-be52-380b70da00a3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967193 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967204 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967215 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zj9k6\" (UniqueName: \"kubernetes.io/projected/95070a69-d45b-48fb-be52-380b70da00a3-kube-api-access-zj9k6\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:07 crc kubenswrapper[3561]: I1203 00:31:07.967227 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/95070a69-d45b-48fb-be52-380b70da00a3-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.151601 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_95070a69-d45b-48fb-be52-380b70da00a3/docker-build/0.log"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.152270 3561 generic.go:334] "Generic (PLEG): container finished" podID="95070a69-d45b-48fb-be52-380b70da00a3" containerID="3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072" exitCode=1
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.152300 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.152318 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"95070a69-d45b-48fb-be52-380b70da00a3","Type":"ContainerDied","Data":"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"}
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.152366 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"95070a69-d45b-48fb-be52-380b70da00a3","Type":"ContainerDied","Data":"00825fa1a526f5de333f4c98a81721da72c01f0be6a146900f03376788a2c55f"}
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.152400 3561 scope.go:117] "RemoveContainer" containerID="3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.199868 3561 scope.go:117] "RemoveContainer" containerID="9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.200259 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "95070a69-d45b-48fb-be52-380b70da00a3" (UID: "95070a69-d45b-48fb-be52-380b70da00a3"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.249882 3561 scope.go:117] "RemoveContainer" containerID="3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"
Dec 03 00:31:08 crc kubenswrapper[3561]: E1203 00:31:08.250370 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072\": container with ID starting with 3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072 not found: ID does not exist" containerID="3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.250418 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072"} err="failed to get container status \"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072\": rpc error: code = NotFound desc = could not find container \"3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072\": container with ID starting with 3904c00eb2e6f97cdc2ad7eb92069bac7be85acd887329fd6475146a1494b072 not found: ID does not exist"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.250430 3561 scope.go:117] "RemoveContainer" containerID="9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896"
Dec 03 00:31:08 crc kubenswrapper[3561]: E1203 00:31:08.251082 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896\": container with ID starting with 9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896 not found: ID does not exist" containerID="9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.251147 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896"} err="failed to get container status \"9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896\": rpc error: code = NotFound desc = could not find container \"9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896\": container with ID starting with 9e66258d433ff7049c0f438394cc35f0ec064eec46079a857819583422ba2896 not found: ID does not exist"
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.270835 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95070a69-d45b-48fb-be52-380b70da00a3-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.487571 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Dec 03 00:31:08 crc kubenswrapper[3561]: I1203 00:31:08.500668 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.017133 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.017353 3561 topology_manager.go:215] "Topology Admit Handler" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" podNamespace="service-telemetry" podName="prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: E1203 00:31:09.017792 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="docker-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.017840 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="docker-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: E1203 00:31:09.017896 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="manage-dockerfile"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.017920 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="manage-dockerfile"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.018220 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="95070a69-d45b-48fb-be52-380b70da00a3" containerName="docker-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.019848 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.024058 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.024094 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.024123 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.024238 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.024453 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186128 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186395 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186470 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186496 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltg9q\" (UniqueName: \"kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186520 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186555 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186584 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186611 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186642 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186680 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName:
\"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186708 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.186733 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287764 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287814 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287848 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287868 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287886 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287912 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287931 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc 
kubenswrapper[3561]: I1203 00:31:09.287959 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.287981 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ltg9q\" (UniqueName: \"kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288012 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288035 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288062 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") 
" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288148 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288744 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288792 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.288799 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.289007 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.289070 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.289606 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.289805 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.290059 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.293899 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push\") pod \"prometheus-webhook-snmp-2-build\" 
(UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.295432 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.309129 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltg9q\" (UniqueName: \"kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.365081 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.676377 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95070a69-d45b-48fb-be52-380b70da00a3" path="/var/lib/kubelet/pods/95070a69-d45b-48fb-be52-380b70da00a3/volumes" Dec 03 00:31:09 crc kubenswrapper[3561]: I1203 00:31:09.681350 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:31:10 crc kubenswrapper[3561]: I1203 00:31:10.170475 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerStarted","Data":"3d32f9c56e10241b923cc6781dbc29d7976c7697c7e3cd160caf0ce245bf1d97"} Dec 03 00:31:11 crc kubenswrapper[3561]: I1203 00:31:11.180941 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerStarted","Data":"a95472e99bd6ab20f11f8e6306b72d4a45c446d12cdc00ef05c4f7aace61b10c"} Dec 03 00:31:11 crc kubenswrapper[3561]: E1203 00:31:11.295900 3561 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.126.11:10250->192.168.126.11:51194: write: broken pipe Dec 03 00:31:12 crc kubenswrapper[3561]: I1203 00:31:12.191252 3561 generic.go:334] "Generic (PLEG): container finished" podID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerID="a95472e99bd6ab20f11f8e6306b72d4a45c446d12cdc00ef05c4f7aace61b10c" exitCode=0 Dec 03 00:31:12 crc kubenswrapper[3561]: I1203 00:31:12.191369 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerDied","Data":"a95472e99bd6ab20f11f8e6306b72d4a45c446d12cdc00ef05c4f7aace61b10c"} Dec 03 00:31:13 crc kubenswrapper[3561]: I1203 
00:31:13.197443 3561 generic.go:334] "Generic (PLEG): container finished" podID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerID="f143146449b59ca6f9372be51e71969d329764873155f05a913b27c8afb7311c" exitCode=0 Dec 03 00:31:13 crc kubenswrapper[3561]: I1203 00:31:13.197483 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerDied","Data":"f143146449b59ca6f9372be51e71969d329764873155f05a913b27c8afb7311c"} Dec 03 00:31:13 crc kubenswrapper[3561]: I1203 00:31:13.255923 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_49406d08-c62f-420a-8d0c-91c0da6675f1/manage-dockerfile/0.log" Dec 03 00:31:14 crc kubenswrapper[3561]: I1203 00:31:14.209106 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerStarted","Data":"4094d448701a35b8fdfcc179ecd10a55669e716d2ff52b9d15b0a8bb84716891"} Dec 03 00:31:14 crc kubenswrapper[3561]: I1203 00:31:14.240624 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=6.240538218 podStartE2EDuration="6.240538218s" podCreationTimestamp="2025-12-03 00:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:31:14.233130725 +0000 UTC m=+1473.013565013" watchObservedRunningTime="2025-12-03 00:31:14.240538218 +0000 UTC m=+1473.021000626" Dec 03 00:31:41 crc kubenswrapper[3561]: I1203 00:31:41.581395 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:31:41 crc kubenswrapper[3561]: I1203 00:31:41.581941 3561 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:31:41 crc kubenswrapper[3561]: I1203 00:31:41.581975 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:31:41 crc kubenswrapper[3561]: I1203 00:31:41.582013 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:31:41 crc kubenswrapper[3561]: I1203 00:31:41.582031 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:32:15 crc kubenswrapper[3561]: I1203 00:32:15.598285 3561 generic.go:334] "Generic (PLEG): container finished" podID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerID="4094d448701a35b8fdfcc179ecd10a55669e716d2ff52b9d15b0a8bb84716891" exitCode=0 Dec 03 00:32:15 crc kubenswrapper[3561]: I1203 00:32:15.598376 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerDied","Data":"4094d448701a35b8fdfcc179ecd10a55669e716d2ff52b9d15b0a8bb84716891"} Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.877279 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.962991 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963060 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963108 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963147 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltg9q\" (UniqueName: \"kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963236 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963261 3561 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963291 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963317 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963356 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963399 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963440 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: 
\"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963498 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir\") pod \"49406d08-c62f-420a-8d0c-91c0da6675f1\" (UID: \"49406d08-c62f-420a-8d0c-91c0da6675f1\") " Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.963760 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.964260 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.964583 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.964775 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.965109 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.965228 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.967522 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.974723 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q" (OuterVolumeSpecName: "kube-api-access-ltg9q") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "kube-api-access-ltg9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.974880 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:16 crc kubenswrapper[3561]: I1203 00:32:16.978533 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065450 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065494 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065509 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065523 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065536 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065567 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/49406d08-c62f-420a-8d0c-91c0da6675f1-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065580 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/49406d08-c62f-420a-8d0c-91c0da6675f1-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065592 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49406d08-c62f-420a-8d0c-91c0da6675f1-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065605 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.065622 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ltg9q\" (UniqueName: \"kubernetes.io/projected/49406d08-c62f-420a-8d0c-91c0da6675f1-kube-api-access-ltg9q\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.100725 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.166494 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.628736 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"49406d08-c62f-420a-8d0c-91c0da6675f1","Type":"ContainerDied","Data":"3d32f9c56e10241b923cc6781dbc29d7976c7697c7e3cd160caf0ce245bf1d97"}
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.628768 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d32f9c56e10241b923cc6781dbc29d7976c7697c7e3cd160caf0ce245bf1d97"
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.628796 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.915989 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "49406d08-c62f-420a-8d0c-91c0da6675f1" (UID: "49406d08-c62f-420a-8d0c-91c0da6675f1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:17 crc kubenswrapper[3561]: I1203 00:32:17.975996 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/49406d08-c62f-420a-8d0c-91c0da6675f1-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.876466 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877031 3561 topology_manager.go:215] "Topology Admit Handler" podUID="44b544d7-e90d-42be-b768-98aecf749387" podNamespace="service-telemetry" podName="service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: E1203 00:32:26.877189 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="git-clone"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877201 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="git-clone"
Dec 03 00:32:26 crc kubenswrapper[3561]: E1203 00:32:26.877212 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="manage-dockerfile"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877219 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="manage-dockerfile"
Dec 03 00:32:26 crc kubenswrapper[3561]: E1203 00:32:26.877234 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="docker-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877242 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="docker-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877373 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="49406d08-c62f-420a-8d0c-91c0da6675f1" containerName="docker-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.877995 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.880330 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.880459 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.880681 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.880765 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.896409 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.988873 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.988955 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.988990 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989017 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989044 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989074 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989098 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989134 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989174 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4r9\" (UniqueName: \"kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989201 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989230 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:26 crc kubenswrapper[3561]: I1203 00:32:26.989263 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090428 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090479 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090519 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090562 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090584 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090824 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.090892 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091001 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091054 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091417 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091390 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091602 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091635 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091756 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ml4r9\" (UniqueName: \"kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.091823 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092134 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092388 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092389 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092760 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092785 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.092794 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.098669 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.099944 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.119325 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml4r9\" (UniqueName: \"kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.194250 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.424334 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Dec 03 00:32:27 crc kubenswrapper[3561]: I1203 00:32:27.691106 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"44b544d7-e90d-42be-b768-98aecf749387","Type":"ContainerStarted","Data":"793f5e4cfdec8460f1bb50f592af7d3a9b7c03130cdf7b57c1f02a3444712b6b"}
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.698732 3561 generic.go:334] "Generic (PLEG): container finished" podID="44b544d7-e90d-42be-b768-98aecf749387" containerID="71927c3c8b8941f69e8b72d9bd715129975c1476ae34fed28dd627be071ac998" exitCode=0
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.698798 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"44b544d7-e90d-42be-b768-98aecf749387","Type":"ContainerDied","Data":"71927c3c8b8941f69e8b72d9bd715129975c1476ae34fed28dd627be071ac998"}
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.986321 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"]
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.986453 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" podNamespace="openshift-marketplace" podName="certified-operators-bkp2q"
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.987736 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:28 crc kubenswrapper[3561]: I1203 00:32:28.998958 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"]
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.115768 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.115860 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6p62\" (UniqueName: \"kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.115900 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.217117 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-g6p62\" (UniqueName: \"kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.217201 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.217262 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.217854 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.218561 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.243747 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6p62\" (UniqueName: \"kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62\") pod \"certified-operators-bkp2q\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.303244 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkp2q"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.557657 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"]
Dec 03 00:32:29 crc kubenswrapper[3561]: W1203 00:32:29.562993 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eff3467_d292_40dc_a81e_90a6334dd439.slice/crio-cc06df6ad20d8470585bf25a89bd57d518e474c0cb666bbcbf5c8d655c08ad9c WatchSource:0}: Error finding container cc06df6ad20d8470585bf25a89bd57d518e474c0cb666bbcbf5c8d655c08ad9c: Status 404 returned error can't find the container with id cc06df6ad20d8470585bf25a89bd57d518e474c0cb666bbcbf5c8d655c08ad9c
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.706637 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_44b544d7-e90d-42be-b768-98aecf749387/docker-build/0.log"
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.707373 3561 generic.go:334] "Generic (PLEG): container finished" podID="44b544d7-e90d-42be-b768-98aecf749387" containerID="49e983dafb5f51b8777452a1dfe96b15adef931815762ab166e94c56378ef69c" exitCode=1
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.707431 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"44b544d7-e90d-42be-b768-98aecf749387","Type":"ContainerDied","Data":"49e983dafb5f51b8777452a1dfe96b15adef931815762ab166e94c56378ef69c"}
Dec 03 00:32:29 crc kubenswrapper[3561]: I1203 00:32:29.708819 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerStarted","Data":"cc06df6ad20d8470585bf25a89bd57d518e474c0cb666bbcbf5c8d655c08ad9c"}
Dec 03 00:32:30 crc kubenswrapper[3561]: I1203 00:32:30.715044 3561 generic.go:334] "Generic (PLEG): container finished" podID="6eff3467-d292-40dc-a81e-90a6334dd439" containerID="796c6427c215e98f4c576fe0a3125b713f330d5e9b0c335abc508c8cceba986f" exitCode=0
Dec 03 00:32:30 crc kubenswrapper[3561]: I1203 00:32:30.715139 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerDied","Data":"796c6427c215e98f4c576fe0a3125b713f330d5e9b0c335abc508c8cceba986f"}
Dec 03 00:32:30 crc kubenswrapper[3561]: I1203 00:32:30.898750 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_44b544d7-e90d-42be-b768-98aecf749387/docker-build/0.log"
Dec 03 00:32:30 crc kubenswrapper[3561]: I1203 00:32:30.899732 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044644 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044692 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044719 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044750 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044773 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044790 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044819 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml4r9\" (UniqueName: \"kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044843 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044869 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044898 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.044927 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.045007 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs\") pod \"44b544d7-e90d-42be-b768-98aecf749387\" (UID: \"44b544d7-e90d-42be-b768-98aecf749387\") "
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.045401 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.045623 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.045878 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.046083 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.046188 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "build-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.046220 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.046244 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.046894 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.047174 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.050114 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9" (OuterVolumeSpecName: "kube-api-access-ml4r9") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "kube-api-access-ml4r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.050110 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.050432 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "44b544d7-e90d-42be-b768-98aecf749387" (UID: "44b544d7-e90d-42be-b768-98aecf749387"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146243 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146528 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146654 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146745 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146833 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.146920 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.147008 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 
crc kubenswrapper[3561]: I1203 00:32:31.147091 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44b544d7-e90d-42be-b768-98aecf749387-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.147171 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44b544d7-e90d-42be-b768-98aecf749387-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.147256 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/44b544d7-e90d-42be-b768-98aecf749387-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.147358 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/44b544d7-e90d-42be-b768-98aecf749387-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.147447 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ml4r9\" (UniqueName: \"kubernetes.io/projected/44b544d7-e90d-42be-b768-98aecf749387-kube-api-access-ml4r9\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.723058 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_44b544d7-e90d-42be-b768-98aecf749387/docker-build/0.log" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.723987 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.724009 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"44b544d7-e90d-42be-b768-98aecf749387","Type":"ContainerDied","Data":"793f5e4cfdec8460f1bb50f592af7d3a9b7c03130cdf7b57c1f02a3444712b6b"} Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.724047 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793f5e4cfdec8460f1bb50f592af7d3a9b7c03130cdf7b57c1f02a3444712b6b" Dec 03 00:32:31 crc kubenswrapper[3561]: I1203 00:32:31.726094 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerStarted","Data":"aad862e3e19834c423a762ba0779179a118adfae4d6aa993d17c86f25b727369"} Dec 03 00:32:32 crc kubenswrapper[3561]: I1203 00:32:32.733900 3561 generic.go:334] "Generic (PLEG): container finished" podID="6eff3467-d292-40dc-a81e-90a6334dd439" containerID="aad862e3e19834c423a762ba0779179a118adfae4d6aa993d17c86f25b727369" exitCode=0 Dec 03 00:32:32 crc kubenswrapper[3561]: I1203 00:32:32.733955 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerDied","Data":"aad862e3e19834c423a762ba0779179a118adfae4d6aa993d17c86f25b727369"} Dec 03 00:32:33 crc kubenswrapper[3561]: I1203 00:32:33.741463 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerStarted","Data":"dc4d6fe03d874fe71e30023ebb92d1b7ba590f70c7c26436690a0b8335ebcadc"} Dec 03 00:32:37 crc kubenswrapper[3561]: I1203 00:32:37.411822 3561 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-bkp2q" podStartSLOduration=7.067209021 podStartE2EDuration="9.411765107s" podCreationTimestamp="2025-12-03 00:32:28 +0000 UTC" firstStartedPulling="2025-12-03 00:32:30.71706703 +0000 UTC m=+1549.497501288" lastFinishedPulling="2025-12-03 00:32:33.061623126 +0000 UTC m=+1551.842057374" observedRunningTime="2025-12-03 00:32:33.763514364 +0000 UTC m=+1552.543948652" watchObservedRunningTime="2025-12-03 00:32:37.411765107 +0000 UTC m=+1556.192199385" Dec 03 00:32:37 crc kubenswrapper[3561]: I1203 00:32:37.416297 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:37 crc kubenswrapper[3561]: I1203 00:32:37.422405 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:37 crc kubenswrapper[3561]: I1203 00:32:37.672453 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b544d7-e90d-42be-b768-98aecf749387" path="/var/lib/kubelet/pods/44b544d7-e90d-42be-b768-98aecf749387/volumes" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.031900 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.032082 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" podNamespace="service-telemetry" podName="service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: E1203 00:32:39.032347 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="44b544d7-e90d-42be-b768-98aecf749387" containerName="manage-dockerfile" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.032370 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b544d7-e90d-42be-b768-98aecf749387" containerName="manage-dockerfile" Dec 03 00:32:39 crc 
kubenswrapper[3561]: E1203 00:32:39.032395 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="44b544d7-e90d-42be-b768-98aecf749387" containerName="docker-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.032408 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b544d7-e90d-42be-b768-98aecf749387" containerName="docker-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.032639 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b544d7-e90d-42be-b768-98aecf749387" containerName="docker-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.034661 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.039361 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.039410 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.039834 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.040902 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.056312 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.230866 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.230974 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231173 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231340 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231489 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231604 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231714 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231885 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.231988 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.232063 3561 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.232133 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.232190 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwzk\" (UniqueName: \"kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.303691 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.303754 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455507 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455701 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455767 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455811 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455864 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455906 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.455965 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wpwzk\" (UniqueName: \"kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.456054 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.456718 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.456786 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 
00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457101 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457283 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457337 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457364 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457397 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run\") pod 
\"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457638 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.457721 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.458950 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.459454 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.460683 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.474358 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.474429 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.480267 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.496360 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpwzk\" (UniqueName: \"kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.548066 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.659095 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.867503 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.868980 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:39 crc kubenswrapper[3561]: I1203 00:32:39.908726 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"] Dec 03 00:32:40 crc kubenswrapper[3561]: I1203 00:32:40.784372 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerStarted","Data":"9297fa21919b43e73360abab182e1fede0c6ed16b5be3577a3880e916b1c852b"} Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.582481 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.582586 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.582646 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.582665 3561 
kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.582699 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.790465 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerStarted","Data":"39d87068cb8c05795ab14b9fe159ef14dffb9705bd062d8d0759555598e59b58"} Dec 03 00:32:41 crc kubenswrapper[3561]: I1203 00:32:41.790633 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bkp2q" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="registry-server" containerID="cri-o://dc4d6fe03d874fe71e30023ebb92d1b7ba590f70c7c26436690a0b8335ebcadc" gracePeriod=2 Dec 03 00:32:41 crc kubenswrapper[3561]: E1203 00:32:41.886018 3561 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 38.102.83.159:54596->38.102.83.159:39005: write tcp 38.102.83.159:54596->38.102.83.159:39005: write: broken pipe Dec 03 00:32:42 crc kubenswrapper[3561]: I1203 00:32:42.796716 3561 generic.go:334] "Generic (PLEG): container finished" podID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerID="39d87068cb8c05795ab14b9fe159ef14dffb9705bd062d8d0759555598e59b58" exitCode=0 Dec 03 00:32:42 crc kubenswrapper[3561]: I1203 00:32:42.796765 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerDied","Data":"39d87068cb8c05795ab14b9fe159ef14dffb9705bd062d8d0759555598e59b58"} Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.804712 3561 generic.go:334] "Generic (PLEG): container finished" 
podID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerID="dc8a3e6c5c758de6be04281c8c37913aabed70e33b8bb48978119ebd6aa31f3d" exitCode=0 Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.804814 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerDied","Data":"dc8a3e6c5c758de6be04281c8c37913aabed70e33b8bb48978119ebd6aa31f3d"} Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.808070 3561 generic.go:334] "Generic (PLEG): container finished" podID="6eff3467-d292-40dc-a81e-90a6334dd439" containerID="dc4d6fe03d874fe71e30023ebb92d1b7ba590f70c7c26436690a0b8335ebcadc" exitCode=0 Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.808110 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerDied","Data":"dc4d6fe03d874fe71e30023ebb92d1b7ba590f70c7c26436690a0b8335ebcadc"} Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.857577 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_cf3c989f-ca9c-4f5e-afc9-a8395ddae07b/manage-dockerfile/0.log" Dec 03 00:32:43 crc kubenswrapper[3561]: I1203 00:32:43.965865 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.126835 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6p62\" (UniqueName: \"kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62\") pod \"6eff3467-d292-40dc-a81e-90a6334dd439\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.127025 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content\") pod \"6eff3467-d292-40dc-a81e-90a6334dd439\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.127066 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities\") pod \"6eff3467-d292-40dc-a81e-90a6334dd439\" (UID: \"6eff3467-d292-40dc-a81e-90a6334dd439\") " Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.128099 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities" (OuterVolumeSpecName: "utilities") pod "6eff3467-d292-40dc-a81e-90a6334dd439" (UID: "6eff3467-d292-40dc-a81e-90a6334dd439"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.132339 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62" (OuterVolumeSpecName: "kube-api-access-g6p62") pod "6eff3467-d292-40dc-a81e-90a6334dd439" (UID: "6eff3467-d292-40dc-a81e-90a6334dd439"). InnerVolumeSpecName "kube-api-access-g6p62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.228440 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g6p62\" (UniqueName: \"kubernetes.io/projected/6eff3467-d292-40dc-a81e-90a6334dd439-kube-api-access-g6p62\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.228483 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.448284 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eff3467-d292-40dc-a81e-90a6334dd439" (UID: "6eff3467-d292-40dc-a81e-90a6334dd439"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.532774 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff3467-d292-40dc-a81e-90a6334dd439-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.819061 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerStarted","Data":"73d295ed6a505f0c7704e4cf77760a59d7110b9f23f88c51794e6e0831c38501"} Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.825262 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkp2q" event={"ID":"6eff3467-d292-40dc-a81e-90a6334dd439","Type":"ContainerDied","Data":"cc06df6ad20d8470585bf25a89bd57d518e474c0cb666bbcbf5c8d655c08ad9c"} Dec 03 00:32:44 crc 
kubenswrapper[3561]: I1203 00:32:44.825315 3561 scope.go:117] "RemoveContainer" containerID="dc4d6fe03d874fe71e30023ebb92d1b7ba590f70c7c26436690a0b8335ebcadc" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.825492 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkp2q" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.866836 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.866781982 podStartE2EDuration="5.866781982s" podCreationTimestamp="2025-12-03 00:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:32:44.855047495 +0000 UTC m=+1563.635481793" watchObservedRunningTime="2025-12-03 00:32:44.866781982 +0000 UTC m=+1563.647216250" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.872604 3561 scope.go:117] "RemoveContainer" containerID="aad862e3e19834c423a762ba0779179a118adfae4d6aa993d17c86f25b727369" Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.895574 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"] Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.903298 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bkp2q"] Dec 03 00:32:44 crc kubenswrapper[3561]: I1203 00:32:44.921777 3561 scope.go:117] "RemoveContainer" containerID="796c6427c215e98f4c576fe0a3125b713f330d5e9b0c335abc508c8cceba986f" Dec 03 00:32:45 crc kubenswrapper[3561]: I1203 00:32:45.676766 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" path="/var/lib/kubelet/pods/6eff3467-d292-40dc-a81e-90a6334dd439/volumes" Dec 03 00:32:47 crc kubenswrapper[3561]: I1203 00:32:47.851415 3561 generic.go:334] "Generic 
(PLEG): container finished" podID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerID="73d295ed6a505f0c7704e4cf77760a59d7110b9f23f88c51794e6e0831c38501" exitCode=0 Dec 03 00:32:47 crc kubenswrapper[3561]: I1203 00:32:47.851484 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerDied","Data":"73d295ed6a505f0c7704e4cf77760a59d7110b9f23f88c51794e6e0831c38501"} Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.121652 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297648 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297708 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297752 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297802 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297850 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297909 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297945 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.297987 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.298022 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: 
\"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.298061 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpwzk\" (UniqueName: \"kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.298091 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.298124 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root\") pod \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\" (UID: \"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b\") " Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.299633 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.299650 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.299984 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.300052 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.300051 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.300651 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.300944 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.301121 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.304064 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.310470 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk" (OuterVolumeSpecName: "kube-api-access-wpwzk") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "kube-api-access-wpwzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.311385 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.312820 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" (UID: "cf3c989f-ca9c-4f5e-afc9-a8395ddae07b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399292 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399350 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399373 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399396 3561 reconciler_common.go:300] "Volume detached for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399416 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399438 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399458 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wpwzk\" (UniqueName: \"kubernetes.io/projected/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-kube-api-access-wpwzk\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399477 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399497 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399521 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399570 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.399594 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/cf3c989f-ca9c-4f5e-afc9-a8395ddae07b-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.869446 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cf3c989f-ca9c-4f5e-afc9-a8395ddae07b","Type":"ContainerDied","Data":"9297fa21919b43e73360abab182e1fede0c6ed16b5be3577a3880e916b1c852b"} Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.870038 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9297fa21919b43e73360abab182e1fede0c6ed16b5be3577a3880e916b1c852b" Dec 03 00:32:49 crc kubenswrapper[3561]: I1203 00:32:49.869597 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.517754 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518176 3561 topology_manager.go:215] "Topology Admit Handler" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" podNamespace="service-telemetry" podName="smart-gateway-operator-bundle-1-build" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518335 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="docker-build" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518350 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="docker-build" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518367 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="registry-server" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518377 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="registry-server" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518391 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="extract-content" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518399 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="extract-content" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518415 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="git-clone" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518423 3561 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="git-clone" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518433 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="manage-dockerfile" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518440 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="manage-dockerfile" Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.518450 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="extract-utilities" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518461 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="extract-utilities" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518606 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eff3467-d292-40dc-a81e-90a6334dd439" containerName="registry-server" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.518627 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3c989f-ca9c-4f5e-afc9-a8395ddae07b" containerName="docker-build" Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.519362 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: W1203 00:32:53.521890 3561 reflector.go:539] object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca": failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.521936 3561 reflector.go:147] object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: W1203 00:32:53.522093 3561 reflector.go:539] object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config": failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-sys-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.522117 3561 reflector.go:147] object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-sys-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: W1203 00:32:53.522155 3561 reflector.go:539] object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca": failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: E1203 00:32:53.522169 3561 reflector.go:147] object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "smart-gateway-operator-bundle-1-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.522238 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.534040 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.659655 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.659777 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6ht\" (UniqueName: \"kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.659904 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.659950 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660086 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660152 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660199 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660249 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660396 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660450 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660499 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.660699 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.762599 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.762713 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.762868 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.762946 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763040 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763092 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763192 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pd6ht\" (UniqueName: \"kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763265 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763328 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763389 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763444 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763489 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763504 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763591 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.763843 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.764029 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.764367 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.764436 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.777493 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.792382 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:53 crc kubenswrapper[3561]: I1203 00:32:53.801803 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd6ht\" (UniqueName: \"kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:54 crc kubenswrapper[3561]: I1203 00:32:54.475337 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca"
Dec 03 00:32:54 crc kubenswrapper[3561]: I1203 00:32:54.484771 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:54 crc kubenswrapper[3561]: E1203 00:32:54.763622 3561 configmap.go:199] Couldn't get configMap service-telemetry/smart-gateway-operator-bundle-1-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 03 00:32:54 crc kubenswrapper[3561]: E1203 00:32:54.764783 3561 configmap.go:199] Couldn't get configMap service-telemetry/smart-gateway-operator-bundle-1-sys-config: failed to sync configmap cache: timed out waiting for the condition
Dec 03 00:32:54 crc kubenswrapper[3561]: E1203 00:32:54.765181 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles podName:9350b04e-e63d-45e8-8493-47f8b72dd242 nodeName:}" failed. No retries permitted until 2025-12-03 00:32:55.263718167 +0000 UTC m=+1574.044152435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "build-ca-bundles" (UniqueName: "kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles") pod "smart-gateway-operator-bundle-1-build" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242") : failed to sync configmap cache: timed out waiting for the condition
Dec 03 00:32:54 crc kubenswrapper[3561]: E1203 00:32:54.765218 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs podName:9350b04e-e63d-45e8-8493-47f8b72dd242 nodeName:}" failed. No retries permitted until 2025-12-03 00:32:55.265207893 +0000 UTC m=+1574.045642161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "build-system-configs" (UniqueName: "kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs") pod "smart-gateway-operator-bundle-1-build" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242") : failed to sync configmap cache: timed out waiting for the condition
Dec 03 00:32:54 crc kubenswrapper[3561]: I1203 00:32:54.898685 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.008176 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.285291 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.285456 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.286626 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.287411 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.335518 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.567777 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Dec 03 00:32:55 crc kubenswrapper[3561]: I1203 00:32:55.908864 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"9350b04e-e63d-45e8-8493-47f8b72dd242","Type":"ContainerStarted","Data":"db6814064d5207df8b3a72885db76d3f447c6ed69a4b34e9f8eeb1450c3048d4"}
Dec 03 00:32:56 crc kubenswrapper[3561]: I1203 00:32:56.918621 3561 generic.go:334] "Generic (PLEG): container finished" podID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerID="73d94a1838920186ca3b8670cd68181d8b18a4b842d3c692cef1c2e917c56b1a" exitCode=0
Dec 03 00:32:56 crc kubenswrapper[3561]: I1203 00:32:56.918686 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"9350b04e-e63d-45e8-8493-47f8b72dd242","Type":"ContainerDied","Data":"73d94a1838920186ca3b8670cd68181d8b18a4b842d3c692cef1c2e917c56b1a"}
Dec 03 00:32:57 crc kubenswrapper[3561]: I1203 00:32:57.623639 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:32:57 crc kubenswrapper[3561]: I1203 00:32:57.623976 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:32:57 crc kubenswrapper[3561]: I1203 00:32:57.927491 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_9350b04e-e63d-45e8-8493-47f8b72dd242/docker-build/0.log"
Dec 03 00:32:57 crc kubenswrapper[3561]: I1203 00:32:57.928494 3561 generic.go:334] "Generic (PLEG): container finished" podID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerID="924196d0d9d33c50ca04c59bd3289a354d058628356a3fa44bd052cd9901f8d2" exitCode=1
Dec 03 00:32:57 crc kubenswrapper[3561]: I1203 00:32:57.928565 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"9350b04e-e63d-45e8-8493-47f8b72dd242","Type":"ContainerDied","Data":"924196d0d9d33c50ca04c59bd3289a354d058628356a3fa44bd052cd9901f8d2"}
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.275247 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_9350b04e-e63d-45e8-8493-47f8b72dd242/docker-build/0.log"
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.276318 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.443833 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.443889 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.443940 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.443981 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.444005 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.444154 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.444642 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445001 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445118 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445095 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445188 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445278 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.445636 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446374 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446454 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446500 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446600 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd6ht\" (UniqueName: \"kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446639 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run\") pod \"9350b04e-e63d-45e8-8493-47f8b72dd242\" (UID: \"9350b04e-e63d-45e8-8493-47f8b72dd242\") "
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446975 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.446999 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447020 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447042 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447062 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447081 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9350b04e-e63d-45e8-8493-47f8b72dd242-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447137 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.447623 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.448342 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.455781 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.455833 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.455979 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht" (OuterVolumeSpecName: "kube-api-access-pd6ht") pod "9350b04e-e63d-45e8-8493-47f8b72dd242" (UID: "9350b04e-e63d-45e8-8493-47f8b72dd242"). InnerVolumeSpecName "kube-api-access-pd6ht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548751 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548794 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/9350b04e-e63d-45e8-8493-47f8b72dd242-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548809 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548819 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9350b04e-e63d-45e8-8493-47f8b72dd242-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548832 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pd6ht\" (UniqueName: \"kubernetes.io/projected/9350b04e-e63d-45e8-8493-47f8b72dd242-kube-api-access-pd6ht\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.548841 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9350b04e-e63d-45e8-8493-47f8b72dd242-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.945626 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_9350b04e-e63d-45e8-8493-47f8b72dd242/docker-build/0.log"
Dec 03 00:32:59
crc kubenswrapper[3561]: I1203 00:32:59.946240 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.946249 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"9350b04e-e63d-45e8-8493-47f8b72dd242","Type":"ContainerDied","Data":"db6814064d5207df8b3a72885db76d3f447c6ed69a4b34e9f8eeb1450c3048d4"} Dec 03 00:32:59 crc kubenswrapper[3561]: I1203 00:32:59.946391 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6814064d5207df8b3a72885db76d3f447c6ed69a4b34e9f8eeb1450c3048d4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.548990 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.549459 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" podNamespace="openshift-marketplace" podName="community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: E1203 00:33:01.549754 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerName="manage-dockerfile" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.549774 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerName="manage-dockerfile" Dec 03 00:33:01 crc kubenswrapper[3561]: E1203 00:33:01.549789 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerName="docker-build" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.549802 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerName="docker-build" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.550011 
3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" containerName="docker-build" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.551412 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.558159 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.680583 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.680768 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.680853 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmkv\" (UniqueName: \"kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.781813 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities\") 
pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.781922 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmkv\" (UniqueName: \"kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.782061 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.782530 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.782816 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content\") pod \"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.815172 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmkv\" (UniqueName: \"kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv\") pod 
\"community-operators-pv2c4\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:01 crc kubenswrapper[3561]: I1203 00:33:01.886821 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:02 crc kubenswrapper[3561]: I1203 00:33:02.141319 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:02 crc kubenswrapper[3561]: I1203 00:33:02.976301 3561 generic.go:334] "Generic (PLEG): container finished" podID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerID="ffba2601ddf15d7cea2431b15822b017d3c5e05771db450c676095d95fae8ead" exitCode=0 Dec 03 00:33:02 crc kubenswrapper[3561]: I1203 00:33:02.976680 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerDied","Data":"ffba2601ddf15d7cea2431b15822b017d3c5e05771db450c676095d95fae8ead"} Dec 03 00:33:02 crc kubenswrapper[3561]: I1203 00:33:02.976712 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerStarted","Data":"62381b07809a306b755e63e062b1d12065bc1ed48291c114e821c646b4de01fb"} Dec 03 00:33:04 crc kubenswrapper[3561]: I1203 00:33:04.469955 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:33:04 crc kubenswrapper[3561]: I1203 00:33:04.478916 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:33:04 crc kubenswrapper[3561]: I1203 00:33:04.989762 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" 
event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerStarted","Data":"8cf0dfa4167d55368ef4612d702cf698b8cafcdcbed1108c2e2a4876ecd73e63"} Dec 03 00:33:05 crc kubenswrapper[3561]: I1203 00:33:05.672618 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9350b04e-e63d-45e8-8493-47f8b72dd242" path="/var/lib/kubelet/pods/9350b04e-e63d-45e8-8493-47f8b72dd242/volumes" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.078089 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.078239 3561 topology_manager.go:215] "Topology Admit Handler" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" podNamespace="service-telemetry" podName="smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.079393 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.082334 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.082528 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.082688 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.087772 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.104134 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:33:06 crc 
kubenswrapper[3561]: I1203 00:33:06.243710 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.243779 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.243950 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244041 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244082 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfp2w\" (UniqueName: 
\"kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244126 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244149 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244175 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244200 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 
03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244219 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244274 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.244298 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345176 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345246 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345281 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345313 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345892 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345908 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345418 3561 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345421 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.345827 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346129 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346170 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346225 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346348 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346388 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346418 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346457 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.346500 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vfp2w\" (UniqueName: \"kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.347117 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.347807 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.347947 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.348176 3561 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.356524 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.364893 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.397404 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfp2w\" (UniqueName: \"kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.400180 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:06 crc kubenswrapper[3561]: I1203 00:33:06.597135 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:33:07 crc kubenswrapper[3561]: I1203 00:33:07.002999 3561 generic.go:334] "Generic (PLEG): container finished" podID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerID="8cf0dfa4167d55368ef4612d702cf698b8cafcdcbed1108c2e2a4876ecd73e63" exitCode=0 Dec 03 00:33:07 crc kubenswrapper[3561]: I1203 00:33:07.003079 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerDied","Data":"8cf0dfa4167d55368ef4612d702cf698b8cafcdcbed1108c2e2a4876ecd73e63"} Dec 03 00:33:07 crc kubenswrapper[3561]: I1203 00:33:07.007245 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerStarted","Data":"52017a9d692a5866fbed2b1cae916d7ff51863913f3c6c87111e0daeacd8ea81"} Dec 03 00:33:07 crc kubenswrapper[3561]: I1203 00:33:07.007272 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerStarted","Data":"c999313306adffac4f326f0bf3d9d436f01089d7054525784f19c631d58ef95d"} Dec 03 00:33:08 crc kubenswrapper[3561]: I1203 00:33:08.013326 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerStarted","Data":"f73def91044eb553b2f6b95d6c2b223a8bb7220af2ac7160662d804f3bd447e3"} Dec 03 00:33:08 crc kubenswrapper[3561]: I1203 00:33:08.015022 3561 generic.go:334] "Generic (PLEG): container finished" podID="d75fbae8-e60c-4392-bd47-674eb1077477" 
containerID="52017a9d692a5866fbed2b1cae916d7ff51863913f3c6c87111e0daeacd8ea81" exitCode=0 Dec 03 00:33:08 crc kubenswrapper[3561]: I1203 00:33:08.015059 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerDied","Data":"52017a9d692a5866fbed2b1cae916d7ff51863913f3c6c87111e0daeacd8ea81"} Dec 03 00:33:08 crc kubenswrapper[3561]: I1203 00:33:08.044566 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pv2c4" podStartSLOduration=2.6873970270000003 podStartE2EDuration="7.044507346s" podCreationTimestamp="2025-12-03 00:33:01 +0000 UTC" firstStartedPulling="2025-12-03 00:33:02.978927292 +0000 UTC m=+1581.759361560" lastFinishedPulling="2025-12-03 00:33:07.336037611 +0000 UTC m=+1586.116471879" observedRunningTime="2025-12-03 00:33:08.043206466 +0000 UTC m=+1586.823640724" watchObservedRunningTime="2025-12-03 00:33:08.044507346 +0000 UTC m=+1586.824941614" Dec 03 00:33:09 crc kubenswrapper[3561]: I1203 00:33:09.021347 3561 generic.go:334] "Generic (PLEG): container finished" podID="d75fbae8-e60c-4392-bd47-674eb1077477" containerID="39aa92a18492c9768e059c556afe833180668e68ebadf71a28e5b21aef546e3b" exitCode=0 Dec 03 00:33:09 crc kubenswrapper[3561]: I1203 00:33:09.021563 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerDied","Data":"39aa92a18492c9768e059c556afe833180668e68ebadf71a28e5b21aef546e3b"} Dec 03 00:33:09 crc kubenswrapper[3561]: I1203 00:33:09.057261 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_d75fbae8-e60c-4392-bd47-674eb1077477/manage-dockerfile/0.log" Dec 03 00:33:10 crc kubenswrapper[3561]: I1203 00:33:10.032597 3561 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerStarted","Data":"6f9c28f0249ee56325f7b363a408c945d1cf6528b6fc0e4ff7dde00e30487e71"} Dec 03 00:33:11 crc kubenswrapper[3561]: I1203 00:33:11.887071 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:11 crc kubenswrapper[3561]: I1203 00:33:11.887142 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:11 crc kubenswrapper[3561]: I1203 00:33:11.991925 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:12 crc kubenswrapper[3561]: I1203 00:33:12.016917 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=6.016865304 podStartE2EDuration="6.016865304s" podCreationTimestamp="2025-12-03 00:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:33:10.072389569 +0000 UTC m=+1588.852823827" watchObservedRunningTime="2025-12-03 00:33:12.016865304 +0000 UTC m=+1590.797299582" Dec 03 00:33:12 crc kubenswrapper[3561]: I1203 00:33:12.127382 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:12 crc kubenswrapper[3561]: I1203 00:33:12.171150 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:14 crc kubenswrapper[3561]: I1203 00:33:14.056921 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pv2c4" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" 
containerName="registry-server" containerID="cri-o://f73def91044eb553b2f6b95d6c2b223a8bb7220af2ac7160662d804f3bd447e3" gracePeriod=2 Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.075017 3561 generic.go:334] "Generic (PLEG): container finished" podID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerID="f73def91044eb553b2f6b95d6c2b223a8bb7220af2ac7160662d804f3bd447e3" exitCode=0 Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.075099 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerDied","Data":"f73def91044eb553b2f6b95d6c2b223a8bb7220af2ac7160662d804f3bd447e3"} Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.350362 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.490727 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities\") pod \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.490790 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkmkv\" (UniqueName: \"kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv\") pod \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.490913 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content\") pod \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\" (UID: \"c2d47cec-cd3c-4587-9bc0-bea23744ab7e\") " Dec 03 00:33:16 crc 
kubenswrapper[3561]: I1203 00:33:16.491692 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities" (OuterVolumeSpecName: "utilities") pod "c2d47cec-cd3c-4587-9bc0-bea23744ab7e" (UID: "c2d47cec-cd3c-4587-9bc0-bea23744ab7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.499509 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv" (OuterVolumeSpecName: "kube-api-access-hkmkv") pod "c2d47cec-cd3c-4587-9bc0-bea23744ab7e" (UID: "c2d47cec-cd3c-4587-9bc0-bea23744ab7e"). InnerVolumeSpecName "kube-api-access-hkmkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.592508 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:16 crc kubenswrapper[3561]: I1203 00:33:16.592568 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hkmkv\" (UniqueName: \"kubernetes.io/projected/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-kube-api-access-hkmkv\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.083612 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2c4" event={"ID":"c2d47cec-cd3c-4587-9bc0-bea23744ab7e","Type":"ContainerDied","Data":"62381b07809a306b755e63e062b1d12065bc1ed48291c114e821c646b4de01fb"} Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.083667 3561 scope.go:117] "RemoveContainer" containerID="f73def91044eb553b2f6b95d6c2b223a8bb7220af2ac7160662d804f3bd447e3" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.083784 3561 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2c4" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.143967 3561 scope.go:117] "RemoveContainer" containerID="8cf0dfa4167d55368ef4612d702cf698b8cafcdcbed1108c2e2a4876ecd73e63" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.195625 3561 scope.go:117] "RemoveContainer" containerID="ffba2601ddf15d7cea2431b15822b017d3c5e05771db450c676095d95fae8ead" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.241346 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2d47cec-cd3c-4587-9bc0-bea23744ab7e" (UID: "c2d47cec-cd3c-4587-9bc0-bea23744ab7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.321401 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d47cec-cd3c-4587-9bc0-bea23744ab7e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.448055 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.456351 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pv2c4"] Dec 03 00:33:17 crc kubenswrapper[3561]: I1203 00:33:17.673753 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" path="/var/lib/kubelet/pods/c2d47cec-cd3c-4587-9bc0-bea23744ab7e/volumes" Dec 03 00:33:19 crc kubenswrapper[3561]: I1203 00:33:19.103226 3561 generic.go:334] "Generic (PLEG): container finished" podID="d75fbae8-e60c-4392-bd47-674eb1077477" 
containerID="6f9c28f0249ee56325f7b363a408c945d1cf6528b6fc0e4ff7dde00e30487e71" exitCode=0 Dec 03 00:33:19 crc kubenswrapper[3561]: I1203 00:33:19.103286 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerDied","Data":"6f9c28f0249ee56325f7b363a408c945d1cf6528b6fc0e4ff7dde00e30487e71"} Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.425514 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.559937 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.559986 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.560072 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.560912 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod 
"d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561000 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561074 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561228 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561314 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561347 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561398 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561402 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561423 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561456 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561495 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561530 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561575 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfp2w\" (UniqueName: \"kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w\") pod \"d75fbae8-e60c-4392-bd47-674eb1077477\" (UID: \"d75fbae8-e60c-4392-bd47-674eb1077477\") " Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561924 3561 
reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561942 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561952 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561966 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.561979 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.562305 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.562388 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.563412 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.566616 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w" (OuterVolumeSpecName: "kube-api-access-vfp2w") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "kube-api-access-vfp2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.566689 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.567098 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.568258 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "d75fbae8-e60c-4392-bd47-674eb1077477" (UID: "d75fbae8-e60c-4392-bd47-674eb1077477"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662683 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662740 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662758 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d75fbae8-e60c-4392-bd47-674eb1077477-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662771 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d75fbae8-e60c-4392-bd47-674eb1077477-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662785 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d75fbae8-e60c-4392-bd47-674eb1077477-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662798 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662811 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/d75fbae8-e60c-4392-bd47-674eb1077477-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:20 crc kubenswrapper[3561]: I1203 00:33:20.662824 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vfp2w\" (UniqueName: \"kubernetes.io/projected/d75fbae8-e60c-4392-bd47-674eb1077477-kube-api-access-vfp2w\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:21 crc kubenswrapper[3561]: I1203 00:33:21.119479 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"d75fbae8-e60c-4392-bd47-674eb1077477","Type":"ContainerDied","Data":"c999313306adffac4f326f0bf3d9d436f01089d7054525784f19c631d58ef95d"} Dec 03 00:33:21 crc kubenswrapper[3561]: I1203 00:33:21.119517 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c999313306adffac4f326f0bf3d9d436f01089d7054525784f19c631d58ef95d" Dec 03 00:33:21 crc kubenswrapper[3561]: I1203 00:33:21.119532 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:27 crc kubenswrapper[3561]: I1203 00:33:27.623927 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:27 crc kubenswrapper[3561]: I1203 00:33:27.624728 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.819964 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820630 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" podNamespace="service-telemetry" podName="service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820823 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="manage-dockerfile" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820834 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="manage-dockerfile" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820851 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="extract-utilities" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820858 3561 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="extract-utilities" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820871 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="registry-server" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820877 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="registry-server" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820886 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="git-clone" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820892 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="git-clone" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820899 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="extract-content" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820905 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="extract-content" Dec 03 00:33:37 crc kubenswrapper[3561]: E1203 00:33:37.820915 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="docker-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.820921 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="docker-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.821032 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75fbae8-e60c-4392-bd47-674eb1077477" containerName="docker-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.821044 3561 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c2d47cec-cd3c-4587-9bc0-bea23744ab7e" containerName="registry-server" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.821864 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.824234 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.825691 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.825850 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.825847 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6qmd9" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.825869 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.842705 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.920768 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.920852 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.920884 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921084 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921302 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921458 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921522 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921650 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921798 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921837 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921939 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.921992 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:37 crc kubenswrapper[3561]: I1203 00:33:37.922012 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqxt\" (UniqueName: \"kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.023755 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.023882 3561 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.023934 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024034 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024699 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024777 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024808 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024852 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024871 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024884 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024973 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024982 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.024309 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025047 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025098 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025213 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025292 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025351 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqxt\" (UniqueName: \"kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025513 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.025618 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.026098 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.026289 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.031164 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.031584 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.031590 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.058128 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqxt\" (UniqueName: \"kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.145100 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:38 crc kubenswrapper[3561]: I1203 00:33:38.646435 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:39 crc kubenswrapper[3561]: I1203 00:33:39.245241 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerStarted","Data":"51881cb64ebc433ef20b9a3c1cc10fba10a3dcabf8cd0678d06fda52363f4917"} Dec 03 00:33:39 crc kubenswrapper[3561]: I1203 00:33:39.245646 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerStarted","Data":"52da5027b7209479a02065c765efe9ab4884dbd6b9aef150952dd97c24082398"} Dec 03 00:33:40 crc kubenswrapper[3561]: I1203 00:33:40.253253 3561 generic.go:334] "Generic (PLEG): container finished" podID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerID="51881cb64ebc433ef20b9a3c1cc10fba10a3dcabf8cd0678d06fda52363f4917" exitCode=0 Dec 03 00:33:40 crc kubenswrapper[3561]: I1203 00:33:40.253381 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerDied","Data":"51881cb64ebc433ef20b9a3c1cc10fba10a3dcabf8cd0678d06fda52363f4917"} Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.262851 3561 generic.go:334] "Generic (PLEG): container finished" podID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerID="7d30a95eae1402ad4cb59d4ca4529ae38d8af55ddff4ce1fed56f070cde8c780" exitCode=0 Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.263207 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" 
event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerDied","Data":"7d30a95eae1402ad4cb59d4ca4529ae38d8af55ddff4ce1fed56f070cde8c780"} Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.314135 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f/manage-dockerfile/0.log" Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.583226 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.583295 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.583347 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.583370 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:33:41 crc kubenswrapper[3561]: I1203 00:33:41.583402 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:33:42 crc kubenswrapper[3561]: I1203 00:33:42.333687 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerStarted","Data":"3f2de1dca8c16ea3c8836031bd6de2700de420300a25e753e32cc1b749c6f674"} Dec 03 00:33:42 crc kubenswrapper[3561]: I1203 00:33:42.360282 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.360225425 podStartE2EDuration="5.360225425s" podCreationTimestamp="2025-12-03 00:33:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:33:42.356157707 +0000 UTC m=+1621.136591965" watchObservedRunningTime="2025-12-03 00:33:42.360225425 +0000 UTC m=+1621.140659693" Dec 03 00:33:57 crc kubenswrapper[3561]: I1203 00:33:57.623212 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:57 crc kubenswrapper[3561]: I1203 00:33:57.623871 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:33:57 crc kubenswrapper[3561]: I1203 00:33:57.623919 3561 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:33:57 crc kubenswrapper[3561]: I1203 00:33:57.624754 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:33:57 crc kubenswrapper[3561]: I1203 00:33:57.624984 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" 
containerID="cri-o://7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" gracePeriod=600 Dec 03 00:34:03 crc kubenswrapper[3561]: I1203 00:34:03.619241 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" exitCode=0 Dec 03 00:34:03 crc kubenswrapper[3561]: I1203 00:34:03.619318 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"} Dec 03 00:34:03 crc kubenswrapper[3561]: I1203 00:34:03.619864 3561 scope.go:117] "RemoveContainer" containerID="f2fe0358891523ffb3867571645f1222796bef04cd6a75ab1c3e21ae15e72601" Dec 03 00:34:03 crc kubenswrapper[3561]: E1203 00:34:03.702356 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:34:04 crc kubenswrapper[3561]: I1203 00:34:04.627072 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:34:04 crc kubenswrapper[3561]: E1203 00:34:04.628081 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:34:18 crc kubenswrapper[3561]: I1203 00:34:18.664863 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:34:18 crc kubenswrapper[3561]: E1203 00:34:18.666508 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:34:22 crc kubenswrapper[3561]: I1203 00:34:22.740942 3561 generic.go:334] "Generic (PLEG): container finished" podID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerID="3f2de1dca8c16ea3c8836031bd6de2700de420300a25e753e32cc1b749c6f674" exitCode=0 Dec 03 00:34:22 crc kubenswrapper[3561]: I1203 00:34:22.741030 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerDied","Data":"3f2de1dca8c16ea3c8836031bd6de2700de420300a25e753e32cc1b749c6f674"} Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.072060 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231145 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231235 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231288 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231322 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231349 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231398 3561 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rxqxt\" (UniqueName: \"kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231426 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231463 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231472 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231502 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231655 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231664 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231733 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231792 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.231836 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull\") pod \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\" (UID: \"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f\") " Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.232154 3561 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.232190 3561 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.232775 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.232833 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.233717 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.235231 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.235426 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.239494 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt" (OuterVolumeSpecName: "kube-api-access-rxqxt") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "kube-api-access-rxqxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.239677 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-push") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "builder-dockercfg-6qmd9-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.242303 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.246684 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull" (OuterVolumeSpecName: "builder-dockercfg-6qmd9-pull") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "builder-dockercfg-6qmd9-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333190 3561 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333227 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rxqxt\" (UniqueName: \"kubernetes.io/projected/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-kube-api-access-rxqxt\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333239 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-push\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333252 3561 reconciler_common.go:300] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333262 3561 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333273 3561 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-6qmd9-pull\" (UniqueName: \"kubernetes.io/secret/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-builder-dockercfg-6qmd9-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333283 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333292 3561 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.333302 3561 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.485319 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.538385 3561 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.760133 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f","Type":"ContainerDied","Data":"52da5027b7209479a02065c765efe9ab4884dbd6b9aef150952dd97c24082398"}
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.760201 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52da5027b7209479a02065c765efe9ab4884dbd6b9aef150952dd97c24082398"
Dec 03 00:34:24 crc kubenswrapper[3561]: I1203 00:34:24.760280 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.719648 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" (UID: "2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.753708 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.753852 3561 topology_manager.go:215] "Topology Admit Handler" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" podNamespace="service-telemetry" podName="service-telemetry-framework-operators-cpndb"
Dec 03 00:34:25 crc kubenswrapper[3561]: E1203 00:34:25.754034 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="docker-build"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.754051 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="docker-build"
Dec 03 00:34:25 crc kubenswrapper[3561]: E1203 00:34:25.754067 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="manage-dockerfile"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.754076 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="manage-dockerfile"
Dec 03 00:34:25 crc kubenswrapper[3561]: E1203 00:34:25.754090 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="git-clone"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.754097 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="git-clone"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.754230 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f" containerName="docker-build"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.755432 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.756752 3561 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2435250e-d7a6-4ab7-ae69-48cd6e6c3c3f-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.768942 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-operators-dockercfg-2l4f5"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.770425 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.858217 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96rwc\" (UniqueName: \"kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc\") pod \"service-telemetry-framework-operators-cpndb\" (UID: \"054a2ad4-acf1-4a3a-b16c-abd7f88398df\") " pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.960108 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-96rwc\" (UniqueName: \"kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc\") pod \"service-telemetry-framework-operators-cpndb\" (UID: \"054a2ad4-acf1-4a3a-b16c-abd7f88398df\") " pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:25 crc kubenswrapper[3561]: I1203 00:34:25.977790 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-96rwc\" (UniqueName: \"kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc\") pod \"service-telemetry-framework-operators-cpndb\" (UID: \"054a2ad4-acf1-4a3a-b16c-abd7f88398df\") " pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:26 crc kubenswrapper[3561]: I1203 00:34:26.072888 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:26 crc kubenswrapper[3561]: I1203 00:34:26.301327 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:26 crc kubenswrapper[3561]: W1203 00:34:26.318820 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod054a2ad4_acf1_4a3a_b16c_abd7f88398df.slice/crio-33403a4e185d32de3a0e7b06ff36d23ddaffbc81d30337b2915aecc67307a8c6 WatchSource:0}: Error finding container 33403a4e185d32de3a0e7b06ff36d23ddaffbc81d30337b2915aecc67307a8c6: Status 404 returned error can't find the container with id 33403a4e185d32de3a0e7b06ff36d23ddaffbc81d30337b2915aecc67307a8c6
Dec 03 00:34:26 crc kubenswrapper[3561]: I1203 00:34:26.772481 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-cpndb" event={"ID":"054a2ad4-acf1-4a3a-b16c-abd7f88398df","Type":"ContainerStarted","Data":"33403a4e185d32de3a0e7b06ff36d23ddaffbc81d30337b2915aecc67307a8c6"}
Dec 03 00:34:26 crc kubenswrapper[3561]: I1203 00:34:26.944323 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.154747 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-bgzn4"]
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.154882 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a93e418c-0ceb-4f9e-88de-9f5278e82ffc" podNamespace="service-telemetry" podName="service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.155636 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.168578 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-bgzn4"]
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.176176 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq55k\" (UniqueName: \"kubernetes.io/projected/a93e418c-0ceb-4f9e-88de-9f5278e82ffc-kube-api-access-gq55k\") pod \"service-telemetry-framework-operators-bgzn4\" (UID: \"a93e418c-0ceb-4f9e-88de-9f5278e82ffc\") " pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.277362 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gq55k\" (UniqueName: \"kubernetes.io/projected/a93e418c-0ceb-4f9e-88de-9f5278e82ffc-kube-api-access-gq55k\") pod \"service-telemetry-framework-operators-bgzn4\" (UID: \"a93e418c-0ceb-4f9e-88de-9f5278e82ffc\") " pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.297853 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq55k\" (UniqueName: \"kubernetes.io/projected/a93e418c-0ceb-4f9e-88de-9f5278e82ffc-kube-api-access-gq55k\") pod \"service-telemetry-framework-operators-bgzn4\" (UID: \"a93e418c-0ceb-4f9e-88de-9f5278e82ffc\") " pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.509044 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.750306 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-bgzn4"]
Dec 03 00:34:27 crc kubenswrapper[3561]: W1203 00:34:27.760638 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93e418c_0ceb_4f9e_88de_9f5278e82ffc.slice/crio-96f738c7d089169987e267faa4e9c50b98c981ec9d649381adcea05917d3c84e WatchSource:0}: Error finding container 96f738c7d089169987e267faa4e9c50b98c981ec9d649381adcea05917d3c84e: Status 404 returned error can't find the container with id 96f738c7d089169987e267faa4e9c50b98c981ec9d649381adcea05917d3c84e
Dec 03 00:34:27 crc kubenswrapper[3561]: I1203 00:34:27.795805 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-bgzn4" event={"ID":"a93e418c-0ceb-4f9e-88de-9f5278e82ffc","Type":"ContainerStarted","Data":"96f738c7d089169987e267faa4e9c50b98c981ec9d649381adcea05917d3c84e"}
Dec 03 00:34:29 crc kubenswrapper[3561]: I1203 00:34:29.667636 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:34:29 crc kubenswrapper[3561]: E1203 00:34:29.668249 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:34:36 crc kubenswrapper[3561]: I1203 00:34:36.862578 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-cpndb" event={"ID":"054a2ad4-acf1-4a3a-b16c-abd7f88398df","Type":"ContainerStarted","Data":"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"}
Dec 03 00:34:36 crc kubenswrapper[3561]: I1203 00:34:36.862719 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-cpndb" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" containerName="registry-server" containerID="cri-o://25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e" gracePeriod=2
Dec 03 00:34:36 crc kubenswrapper[3561]: I1203 00:34:36.864060 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-bgzn4" event={"ID":"a93e418c-0ceb-4f9e-88de-9f5278e82ffc","Type":"ContainerStarted","Data":"2df5ef319bd7d786f364221a93072ef18986e1ae7fe0584dc9c00bfeed0a70d1"}
Dec 03 00:34:36 crc kubenswrapper[3561]: I1203 00:34:36.881261 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-cpndb" podStartSLOduration=1.937389094 podStartE2EDuration="11.881203666s" podCreationTimestamp="2025-12-03 00:34:25 +0000 UTC" firstStartedPulling="2025-12-03 00:34:26.32321219 +0000 UTC m=+1665.103646448" lastFinishedPulling="2025-12-03 00:34:36.267026752 +0000 UTC m=+1675.047461020" observedRunningTime="2025-12-03 00:34:36.879286447 +0000 UTC m=+1675.659720735" watchObservedRunningTime="2025-12-03 00:34:36.881203666 +0000 UTC m=+1675.661637934"
Dec 03 00:34:36 crc kubenswrapper[3561]: I1203 00:34:36.901203 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-bgzn4" podStartSLOduration=1.387058267 podStartE2EDuration="9.901135028s" podCreationTimestamp="2025-12-03 00:34:27 +0000 UTC" firstStartedPulling="2025-12-03 00:34:27.762989545 +0000 UTC m=+1666.543423803" lastFinishedPulling="2025-12-03 00:34:36.277066296 +0000 UTC m=+1675.057500564" observedRunningTime="2025-12-03 00:34:36.89674146 +0000 UTC m=+1675.677175718" watchObservedRunningTime="2025-12-03 00:34:36.901135028 +0000 UTC m=+1675.681569326"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.274997 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.404314 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96rwc\" (UniqueName: \"kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc\") pod \"054a2ad4-acf1-4a3a-b16c-abd7f88398df\" (UID: \"054a2ad4-acf1-4a3a-b16c-abd7f88398df\") "
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.410030 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc" (OuterVolumeSpecName: "kube-api-access-96rwc") pod "054a2ad4-acf1-4a3a-b16c-abd7f88398df" (UID: "054a2ad4-acf1-4a3a-b16c-abd7f88398df"). InnerVolumeSpecName "kube-api-access-96rwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.505797 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-96rwc\" (UniqueName: \"kubernetes.io/projected/054a2ad4-acf1-4a3a-b16c-abd7f88398df-kube-api-access-96rwc\") on node \"crc\" DevicePath \"\""
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.509743 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.509788 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.636969 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.870389 3561 generic.go:334] "Generic (PLEG): container finished" podID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" containerID="25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e" exitCode=0
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.870449 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-cpndb"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.870463 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-cpndb" event={"ID":"054a2ad4-acf1-4a3a-b16c-abd7f88398df","Type":"ContainerDied","Data":"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"}
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.870519 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-cpndb" event={"ID":"054a2ad4-acf1-4a3a-b16c-abd7f88398df","Type":"ContainerDied","Data":"33403a4e185d32de3a0e7b06ff36d23ddaffbc81d30337b2915aecc67307a8c6"}
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.870554 3561 scope.go:117] "RemoveContainer" containerID="25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.899418 3561 scope.go:117] "RemoveContainer" containerID="25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"
Dec 03 00:34:37 crc kubenswrapper[3561]: E1203 00:34:37.899895 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e\": container with ID starting with 25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e not found: ID does not exist" containerID="25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.899940 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e"} err="failed to get container status \"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e\": rpc error: code = NotFound desc = could not find container \"25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e\": container with ID starting with 25629f728dfe136baac7a8522dd0c1bb01f6f9f40343f426e47d871dda49a11e not found: ID does not exist"
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.913705 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:37 crc kubenswrapper[3561]: I1203 00:34:37.918372 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-cpndb"]
Dec 03 00:34:39 crc kubenswrapper[3561]: I1203 00:34:39.674575 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" path="/var/lib/kubelet/pods/054a2ad4-acf1-4a3a-b16c-abd7f88398df/volumes"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.584509 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.584594 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.584647 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.584693 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.584723 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:34:41 crc kubenswrapper[3561]: I1203 00:34:41.669848 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:34:41 crc kubenswrapper[3561]: E1203 00:34:41.670946 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:34:47 crc kubenswrapper[3561]: I1203 00:34:47.598286 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-bgzn4"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.287504 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"]
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.289692 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" podNamespace="service-telemetry" podName="372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: E1203 00:34:52.290110 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" containerName="registry-server"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.290284 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" containerName="registry-server"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.290764 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="054a2ad4-acf1-4a3a-b16c-abd7f88398df" containerName="registry-server"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.295759 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.303461 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"]
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.396963 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.397033 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4d9s\" (UniqueName: \"kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.397080 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.498748 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.498800 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x4d9s\" (UniqueName: \"kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.498834 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.499305 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.499337 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.522519 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4d9s\" (UniqueName: \"kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.656366 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.697016 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"]
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.697159 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" podNamespace="service-telemetry" podName="500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.698408 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.711427 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"]
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.803019 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.803094 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.803133 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zvl\" (UniqueName: \"kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.904311 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.904372 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.904405 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zvl\" (UniqueName: \"kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.905147 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.905503 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.938865 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zvl\" (UniqueName: \"kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.938865 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"]
Dec 03 00:34:52 crc kubenswrapper[3561]: I1203 00:34:52.957663 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" event={"ID":"8fae8149-16c9-4060-b69f-5c923b3dd1f5","Type":"ContainerStarted","Data":"d68ad183a6db4f38ac84969f2b300e2885a9b2fca71142529db150f436ebc6f2"}
Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.039115 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"
Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.357058 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7"]
Dec 03 00:34:53 crc kubenswrapper[3561]: W1203 00:34:53.361432 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb27c890_f2bb_4974_9ce8_7b57c341c08f.slice/crio-dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15 WatchSource:0}: Error finding container dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15: Status 404 returned error can't find the container with id dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15
Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.664435 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:34:53 crc kubenswrapper[3561]: E1203 00:34:53.665066 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.964450 3561 generic.go:334] "Generic (PLEG): container finished" podID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerID="d345899961425bdd419ea208d1d890631c265b811e3ed031e6b16c5b063baeaa" exitCode=0
Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.964486 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h"
event={"ID":"8fae8149-16c9-4060-b69f-5c923b3dd1f5","Type":"ContainerDied","Data":"d345899961425bdd419ea208d1d890631c265b811e3ed031e6b16c5b063baeaa"} Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.966059 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerID="6a9a679c51f86ae8f39966189718224b892f5563bd01099c278632b602b5da71" exitCode=0 Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.966089 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" event={"ID":"bb27c890-f2bb-4974-9ce8-7b57c341c08f","Type":"ContainerDied","Data":"6a9a679c51f86ae8f39966189718224b892f5563bd01099c278632b602b5da71"} Dec 03 00:34:53 crc kubenswrapper[3561]: I1203 00:34:53.966108 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" event={"ID":"bb27c890-f2bb-4974-9ce8-7b57c341c08f","Type":"ContainerStarted","Data":"dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15"} Dec 03 00:34:56 crc kubenswrapper[3561]: I1203 00:34:56.009479 3561 generic.go:334] "Generic (PLEG): container finished" podID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerID="4d0b005b9cb95c1461c9a747375071bf655edfcd013536edc6dd65c9d3f847b6" exitCode=0 Dec 03 00:34:56 crc kubenswrapper[3561]: I1203 00:34:56.009569 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" event={"ID":"8fae8149-16c9-4060-b69f-5c923b3dd1f5","Type":"ContainerDied","Data":"4d0b005b9cb95c1461c9a747375071bf655edfcd013536edc6dd65c9d3f847b6"} Dec 03 00:34:56 crc kubenswrapper[3561]: I1203 00:34:56.012071 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerID="3e613cc52073956ea2d6348f99eee584d435c34e556a71ff90aa15c75d00c8b4" exitCode=0 Dec 03 
00:34:56 crc kubenswrapper[3561]: I1203 00:34:56.012107 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" event={"ID":"bb27c890-f2bb-4974-9ce8-7b57c341c08f","Type":"ContainerDied","Data":"3e613cc52073956ea2d6348f99eee584d435c34e556a71ff90aa15c75d00c8b4"} Dec 03 00:34:57 crc kubenswrapper[3561]: I1203 00:34:57.024277 3561 generic.go:334] "Generic (PLEG): container finished" podID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerID="49eebcdea75f2b3f29a388fec075f52bc66c8e516c72343277fb003787cd3eca" exitCode=0 Dec 03 00:34:57 crc kubenswrapper[3561]: I1203 00:34:57.024309 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" event={"ID":"8fae8149-16c9-4060-b69f-5c923b3dd1f5","Type":"ContainerDied","Data":"49eebcdea75f2b3f29a388fec075f52bc66c8e516c72343277fb003787cd3eca"} Dec 03 00:34:57 crc kubenswrapper[3561]: I1203 00:34:57.027266 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerID="c1b789dde9ccf5aa29fa72479c260410155ae6125fd605ab3527ae691c44ba11" exitCode=0 Dec 03 00:34:57 crc kubenswrapper[3561]: I1203 00:34:57.027313 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" event={"ID":"bb27c890-f2bb-4974-9ce8-7b57c341c08f","Type":"ContainerDied","Data":"c1b789dde9ccf5aa29fa72479c260410155ae6125fd605ab3527ae691c44ba11"} Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.227479 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.281255 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419590 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4d9s\" (UniqueName: \"kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s\") pod \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419686 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle\") pod \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419717 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2zvl\" (UniqueName: \"kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl\") pod \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419741 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util\") pod \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419773 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle\") pod \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\" (UID: \"8fae8149-16c9-4060-b69f-5c923b3dd1f5\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.419825 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util\") pod \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\" (UID: \"bb27c890-f2bb-4974-9ce8-7b57c341c08f\") " Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.420914 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle" (OuterVolumeSpecName: "bundle") pod "8fae8149-16c9-4060-b69f-5c923b3dd1f5" (UID: "8fae8149-16c9-4060-b69f-5c923b3dd1f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.421018 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle" (OuterVolumeSpecName: "bundle") pod "bb27c890-f2bb-4974-9ce8-7b57c341c08f" (UID: "bb27c890-f2bb-4974-9ce8-7b57c341c08f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.425100 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl" (OuterVolumeSpecName: "kube-api-access-p2zvl") pod "bb27c890-f2bb-4974-9ce8-7b57c341c08f" (UID: "bb27c890-f2bb-4974-9ce8-7b57c341c08f"). InnerVolumeSpecName "kube-api-access-p2zvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.425241 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s" (OuterVolumeSpecName: "kube-api-access-x4d9s") pod "8fae8149-16c9-4060-b69f-5c923b3dd1f5" (UID: "8fae8149-16c9-4060-b69f-5c923b3dd1f5"). InnerVolumeSpecName "kube-api-access-x4d9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.440483 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util" (OuterVolumeSpecName: "util") pod "8fae8149-16c9-4060-b69f-5c923b3dd1f5" (UID: "8fae8149-16c9-4060-b69f-5c923b3dd1f5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.447195 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util" (OuterVolumeSpecName: "util") pod "bb27c890-f2bb-4974-9ce8-7b57c341c08f" (UID: "bb27c890-f2bb-4974-9ce8-7b57c341c08f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521406 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521458 3561 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fae8149-16c9-4060-b69f-5c923b3dd1f5-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521471 3561 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521482 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x4d9s\" (UniqueName: \"kubernetes.io/projected/8fae8149-16c9-4060-b69f-5c923b3dd1f5-kube-api-access-x4d9s\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521492 3561 
reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb27c890-f2bb-4974-9ce8-7b57c341c08f-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:58 crc kubenswrapper[3561]: I1203 00:34:58.521503 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p2zvl\" (UniqueName: \"kubernetes.io/projected/bb27c890-f2bb-4974-9ce8-7b57c341c08f-kube-api-access-p2zvl\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.040562 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.040564 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0929l2h" event={"ID":"8fae8149-16c9-4060-b69f-5c923b3dd1f5","Type":"ContainerDied","Data":"d68ad183a6db4f38ac84969f2b300e2885a9b2fca71142529db150f436ebc6f2"} Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.040700 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d68ad183a6db4f38ac84969f2b300e2885a9b2fca71142529db150f436ebc6f2" Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.043100 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" event={"ID":"bb27c890-f2bb-4974-9ce8-7b57c341c08f","Type":"ContainerDied","Data":"dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15"} Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.043131 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7f9b565d8388fa1f3528a46d635bd02db625fcf5a0dbc986a37e34ec93fd15" Dec 03 00:34:59 crc kubenswrapper[3561]: I1203 00:34:59.043184 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a66pp7" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.744735 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5488c6f949-2ndv8"] Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745320 3561 topology_manager.go:215] "Topology Admit Handler" podUID="91f69ebe-c562-473e-9aa7-959b35cccc55" podNamespace="service-telemetry" podName="smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745489 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="pull" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745500 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="pull" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745520 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="util" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745526 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="util" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745554 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="pull" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745563 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="pull" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745572 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745578 3561 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745587 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="util" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745593 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="util" Dec 03 00:35:05 crc kubenswrapper[3561]: E1203 00:35:05.745603 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745609 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745710 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb27c890-f2bb-4974-9ce8-7b57c341c08f" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.745728 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fae8149-16c9-4060-b69f-5c923b3dd1f5" containerName="extract" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.746173 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.748278 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-q6vqx" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.759956 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5488c6f949-2ndv8"] Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.908758 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/91f69ebe-c562-473e-9aa7-959b35cccc55-runner\") pod \"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:05 crc kubenswrapper[3561]: I1203 00:35:05.908860 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmzk\" (UniqueName: \"kubernetes.io/projected/91f69ebe-c562-473e-9aa7-959b35cccc55-kube-api-access-bxmzk\") pod \"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.010598 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/91f69ebe-c562-473e-9aa7-959b35cccc55-runner\") pod \"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.009923 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/91f69ebe-c562-473e-9aa7-959b35cccc55-runner\") pod 
\"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.010790 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmzk\" (UniqueName: \"kubernetes.io/projected/91f69ebe-c562-473e-9aa7-959b35cccc55-kube-api-access-bxmzk\") pod \"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.041941 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmzk\" (UniqueName: \"kubernetes.io/projected/91f69ebe-c562-473e-9aa7-959b35cccc55-kube-api-access-bxmzk\") pod \"smart-gateway-operator-5488c6f949-2ndv8\" (UID: \"91f69ebe-c562-473e-9aa7-959b35cccc55\") " pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.061667 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" Dec 03 00:35:06 crc kubenswrapper[3561]: I1203 00:35:06.290720 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5488c6f949-2ndv8"] Dec 03 00:35:06 crc kubenswrapper[3561]: W1203 00:35:06.296877 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f69ebe_c562_473e_9aa7_959b35cccc55.slice/crio-644ecf564a7fa8b5f699f095195cbdb2c3f1c9007604f6293020ecfc528ce070 WatchSource:0}: Error finding container 644ecf564a7fa8b5f699f095195cbdb2c3f1c9007604f6293020ecfc528ce070: Status 404 returned error can't find the container with id 644ecf564a7fa8b5f699f095195cbdb2c3f1c9007604f6293020ecfc528ce070 Dec 03 00:35:07 crc kubenswrapper[3561]: I1203 00:35:07.143151 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" event={"ID":"91f69ebe-c562-473e-9aa7-959b35cccc55","Type":"ContainerStarted","Data":"644ecf564a7fa8b5f699f095195cbdb2c3f1c9007604f6293020ecfc528ce070"} Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.500060 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6755fcb848-kqnxv"] Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.500481 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2618f9bf-b6a4-4371-8f7c-685c9682054a" podNamespace="service-telemetry" podName="service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.501383 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.504274 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-l4kxk" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.518757 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6755fcb848-kqnxv"] Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.647654 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzww\" (UniqueName: \"kubernetes.io/projected/2618f9bf-b6a4-4371-8f7c-685c9682054a-kube-api-access-twzww\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.647973 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2618f9bf-b6a4-4371-8f7c-685c9682054a-runner\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.664816 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:35:08 crc kubenswrapper[3561]: E1203 00:35:08.665676 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.748931 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-twzww\" (UniqueName: \"kubernetes.io/projected/2618f9bf-b6a4-4371-8f7c-685c9682054a-kube-api-access-twzww\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.749015 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2618f9bf-b6a4-4371-8f7c-685c9682054a-runner\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.749938 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2618f9bf-b6a4-4371-8f7c-685c9682054a-runner\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.771828 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-twzww\" (UniqueName: \"kubernetes.io/projected/2618f9bf-b6a4-4371-8f7c-685c9682054a-kube-api-access-twzww\") pod \"service-telemetry-operator-6755fcb848-kqnxv\" (UID: \"2618f9bf-b6a4-4371-8f7c-685c9682054a\") " pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:08 crc kubenswrapper[3561]: I1203 00:35:08.816664 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" Dec 03 00:35:09 crc kubenswrapper[3561]: I1203 00:35:09.047069 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6755fcb848-kqnxv"] Dec 03 00:35:09 crc kubenswrapper[3561]: W1203 00:35:09.060977 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2618f9bf_b6a4_4371_8f7c_685c9682054a.slice/crio-708349187ceb43ee4db946a2055c5c2657e2fc41ae8c7503799834c9cdb23252 WatchSource:0}: Error finding container 708349187ceb43ee4db946a2055c5c2657e2fc41ae8c7503799834c9cdb23252: Status 404 returned error can't find the container with id 708349187ceb43ee4db946a2055c5c2657e2fc41ae8c7503799834c9cdb23252 Dec 03 00:35:09 crc kubenswrapper[3561]: I1203 00:35:09.158332 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" event={"ID":"2618f9bf-b6a4-4371-8f7c-685c9682054a","Type":"ContainerStarted","Data":"708349187ceb43ee4db946a2055c5c2657e2fc41ae8c7503799834c9cdb23252"} Dec 03 00:35:23 crc kubenswrapper[3561]: I1203 00:35:23.664554 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:35:23 crc kubenswrapper[3561]: E1203 00:35:23.665599 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:35:28 crc kubenswrapper[3561]: I1203 00:35:28.342913 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" event={"ID":"2618f9bf-b6a4-4371-8f7c-685c9682054a","Type":"ContainerStarted","Data":"c0e751f341b03ec77f8b19be6934c3108711fb89e3dee3177835065881a400ae"}
Dec 03 00:35:28 crc kubenswrapper[3561]: I1203 00:35:28.345269 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" event={"ID":"91f69ebe-c562-473e-9aa7-959b35cccc55","Type":"ContainerStarted","Data":"90ea69e3a553d4294142ed7809ef740b7d11b0b5a4361539886929d5e3698d29"}
Dec 03 00:35:28 crc kubenswrapper[3561]: I1203 00:35:28.365848 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6755fcb848-kqnxv" podStartSLOduration=1.5396205410000001 podStartE2EDuration="20.365805686s" podCreationTimestamp="2025-12-03 00:35:08 +0000 UTC" firstStartedPulling="2025-12-03 00:35:09.063766863 +0000 UTC m=+1707.844201131" lastFinishedPulling="2025-12-03 00:35:27.889952018 +0000 UTC m=+1726.670386276" observedRunningTime="2025-12-03 00:35:28.363018909 +0000 UTC m=+1727.143453167" watchObservedRunningTime="2025-12-03 00:35:28.365805686 +0000 UTC m=+1727.146239944"
Dec 03 00:35:28 crc kubenswrapper[3561]: I1203 00:35:28.387020 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5488c6f949-2ndv8" podStartSLOduration=1.8260523100000001 podStartE2EDuration="23.386966166s" podCreationTimestamp="2025-12-03 00:35:05 +0000 UTC" firstStartedPulling="2025-12-03 00:35:06.298579672 +0000 UTC m=+1705.079013930" lastFinishedPulling="2025-12-03 00:35:27.859493528 +0000 UTC m=+1726.639927786" observedRunningTime="2025-12-03 00:35:28.379202184 +0000 UTC m=+1727.159636472" watchObservedRunningTime="2025-12-03 00:35:28.386966166 +0000 UTC m=+1727.167400444"
Dec 03 00:35:35 crc kubenswrapper[3561]: I1203 00:35:35.668783 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:35:35 crc kubenswrapper[3561]: E1203 00:35:35.669896 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:35:41 crc kubenswrapper[3561]: I1203 00:35:41.585171 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:35:41 crc kubenswrapper[3561]: I1203 00:35:41.585613 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:35:41 crc kubenswrapper[3561]: I1203 00:35:41.585645 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:35:41 crc kubenswrapper[3561]: I1203 00:35:41.585688 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:35:41 crc kubenswrapper[3561]: I1203 00:35:41.585708 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.664409 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:35:50 crc kubenswrapper[3561]: E1203 00:35:50.665514 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.829666 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"]
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.829788 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.830471 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.832888 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.833650 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.833651 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.834447 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.834567 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.836922 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.848515 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"]
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987370 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987421 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987450 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987527 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987659 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knw8b\" (UniqueName: \"kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987707 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:50 crc kubenswrapper[3561]: I1203 00:35:50.987744 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.088927 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089010 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089036 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089065 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089085 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089118 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-knw8b\" (UniqueName: \"kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.089148 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.090159 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.094815 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.095434 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.095623 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.097095 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.101309 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.109190 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-knw8b\" (UniqueName: \"kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b\") pod \"default-interconnect-84dbc59cb8-z7qbr\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.148657 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr"
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.450699 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"]
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.462034 3561 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 00:35:51 crc kubenswrapper[3561]: I1203 00:35:51.472830 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" event={"ID":"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e","Type":"ContainerStarted","Data":"8dd0302b49f5a476beb3c67211c5a679968917acc6e8673697c1200badd5d66e"}
Dec 03 00:35:57 crc kubenswrapper[3561]: I1203 00:35:57.519283 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" event={"ID":"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e","Type":"ContainerStarted","Data":"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02"}
Dec 03 00:35:57 crc kubenswrapper[3561]: I1203 00:35:57.557350 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" podStartSLOduration=1.987198764 podStartE2EDuration="7.557294407s" podCreationTimestamp="2025-12-03 00:35:50 +0000 UTC" firstStartedPulling="2025-12-03 00:35:51.461983074 +0000 UTC m=+1750.242417332" lastFinishedPulling="2025-12-03 00:35:57.032078717 +0000 UTC m=+1755.812512975" observedRunningTime="2025-12-03 00:35:57.533374981 +0000 UTC m=+1756.313809249" watchObservedRunningTime="2025-12-03 00:35:57.557294407 +0000 UTC m=+1756.337728685"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.851194 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.851764 3561 topology_manager.go:215] "Topology Admit Handler" podUID="782f2197-f3e1-4ea1-988f-7acbd394c9e8" podNamespace="service-telemetry" podName="prometheus-default-0"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.853288 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.857312 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.857370 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.857824 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.858172 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-lx86k"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.858439 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.858493 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.858658 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.859742 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0"
Dec 03 00:36:01 crc kubenswrapper[3561]: I1203 00:36:01.869816 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.028608 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.028659 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config-out\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.028681 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.028705 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-web-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029069 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029242 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029319 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-tls-assets\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029368 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029471 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.029610 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6c7\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-kube-api-access-ns6c7\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.130975 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131058 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131089 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-tls-assets\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131112 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131185 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131226 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ns6c7\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-kube-api-access-ns6c7\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131259 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131278 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config-out\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131297 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.131327 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-web-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: E1203 00:36:02.132260 3561 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 03 00:36:02 crc kubenswrapper[3561]: E1203 00:36:02.132378 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls podName:782f2197-f3e1-4ea1-988f-7acbd394c9e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:02.632347709 +0000 UTC m=+1761.412781977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "782f2197-f3e1-4ea1-988f-7acbd394c9e8") : secret "default-prometheus-proxy-tls" not found
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.132503 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.133019 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/782f2197-f3e1-4ea1-988f-7acbd394c9e8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.136973 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config-out\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.139103 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-tls-assets\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.140573 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.141274 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.142005 3561 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.142049 3561 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02dcc673f1757b7ae73f482f8f4e46e0af8f477885c9e45b49ea71503aac9bb9/globalmount\"" pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.161807 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-web-config\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.180682 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns6c7\" (UniqueName: \"kubernetes.io/projected/782f2197-f3e1-4ea1-988f-7acbd394c9e8-kube-api-access-ns6c7\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.197471 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa28afbc-c735-4185-8e9e-f2ecb80c9613\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: I1203 00:36:02.636523 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:02 crc kubenswrapper[3561]: E1203 00:36:02.636674 3561 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 03 00:36:02 crc kubenswrapper[3561]: E1203 00:36:02.636763 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls podName:782f2197-f3e1-4ea1-988f-7acbd394c9e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:03.63674335 +0000 UTC m=+1762.417177608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "782f2197-f3e1-4ea1-988f-7acbd394c9e8") : secret "default-prometheus-proxy-tls" not found
Dec 03 00:36:03 crc kubenswrapper[3561]: I1203 00:36:03.648848 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:03 crc kubenswrapper[3561]: I1203 00:36:03.653211 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/782f2197-f3e1-4ea1-988f-7acbd394c9e8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"782f2197-f3e1-4ea1-988f-7acbd394c9e8\") " pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:03 crc kubenswrapper[3561]: I1203 00:36:03.777836 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 03 00:36:03 crc kubenswrapper[3561]: I1203 00:36:03.982357 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 03 00:36:04 crc kubenswrapper[3561]: I1203 00:36:04.560912 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerStarted","Data":"f8b5346c202547473b8d780da0fd21dad1503b3f4ed5671339636ae13d275fdd"}
Dec 03 00:36:05 crc kubenswrapper[3561]: I1203 00:36:05.665353 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:36:05 crc kubenswrapper[3561]: E1203 00:36:05.668985 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.736396 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"]
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.738390 3561 topology_manager.go:215] "Topology Admit Handler" podUID="b2b2911c-9415-4aac-a85d-b29266af55c2" podNamespace="service-telemetry" podName="default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.739750 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.748314 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"]
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.840730 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fllv\" (UniqueName: \"kubernetes.io/projected/b2b2911c-9415-4aac-a85d-b29266af55c2-kube-api-access-6fllv\") pod \"default-snmp-webhook-6755fc87b7-lptzn\" (UID: \"b2b2911c-9415-4aac-a85d-b29266af55c2\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.941932 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6fllv\" (UniqueName: \"kubernetes.io/projected/b2b2911c-9415-4aac-a85d-b29266af55c2-kube-api-access-6fllv\") pod \"default-snmp-webhook-6755fc87b7-lptzn\" (UID: \"b2b2911c-9415-4aac-a85d-b29266af55c2\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:11 crc kubenswrapper[3561]: I1203 00:36:11.961918 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fllv\" (UniqueName: \"kubernetes.io/projected/b2b2911c-9415-4aac-a85d-b29266af55c2-kube-api-access-6fllv\") pod \"default-snmp-webhook-6755fc87b7-lptzn\" (UID: \"b2b2911c-9415-4aac-a85d-b29266af55c2\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:12 crc kubenswrapper[3561]: I1203 00:36:12.054059 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"
Dec 03 00:36:12 crc kubenswrapper[3561]: I1203 00:36:12.497254 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-lptzn"]
Dec 03 00:36:12 crc kubenswrapper[3561]: W1203 00:36:12.502551 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2b2911c_9415_4aac_a85d_b29266af55c2.slice/crio-4b3f71739a439aea2651293120cdd935837da51ffeeb5b818c977c6c7f6f556f WatchSource:0}: Error finding container 4b3f71739a439aea2651293120cdd935837da51ffeeb5b818c977c6c7f6f556f: Status 404 returned error can't find the container with id 4b3f71739a439aea2651293120cdd935837da51ffeeb5b818c977c6c7f6f556f
Dec 03 00:36:12 crc kubenswrapper[3561]: I1203 00:36:12.619574 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn" event={"ID":"b2b2911c-9415-4aac-a85d-b29266af55c2","Type":"ContainerStarted","Data":"4b3f71739a439aea2651293120cdd935837da51ffeeb5b818c977c6c7f6f556f"}
Dec 03 00:36:12 crc kubenswrapper[3561]: I1203 00:36:12.621117 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerStarted","Data":"531469437cbc3a4fe1fddcceaa78c6ed8766af7692dc960d5e88b8679ae7346c"}
Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.230656 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.231158 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8883bba5-6555-4f12-9e01-5e4fc4712a25" podNamespace="service-telemetry" podName="alertmanager-default-0"
Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.233225 3561 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.235789 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.237945 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.238022 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.238068 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.238365 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-9nn8d" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.240114 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.240137 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400340 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400673 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400706 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw8fl\" (UniqueName: \"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-kube-api-access-dw8fl\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400732 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400762 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-out\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400790 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400824 3561 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-web-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400846 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-volume\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.400865 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.501889 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.501950 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dw8fl\" (UniqueName: \"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-kube-api-access-dw8fl\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.501982 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502015 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-out\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502048 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502073 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-web-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502096 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-volume\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502116 3561 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.502155 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: E1203 00:36:15.503071 3561 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:15 crc kubenswrapper[3561]: E1203 00:36:15.503126 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls podName:8883bba5-6555-4f12-9e01-5e4fc4712a25 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:16.003113088 +0000 UTC m=+1774.783547346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8883bba5-6555-4f12-9e01-5e4fc4712a25") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.506952 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.507031 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-volume\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.507166 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-web-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.508162 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8883bba5-6555-4f12-9e01-5e4fc4712a25-config-out\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.510926 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.515058 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.515808 3561 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.515929 3561 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/87706bf76e7c1530e46b8fad0b5cac14546bb73f3780eae3e4991641946d373f/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.524720 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw8fl\" (UniqueName: \"kubernetes.io/projected/8883bba5-6555-4f12-9e01-5e4fc4712a25-kube-api-access-dw8fl\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:15 crc kubenswrapper[3561]: I1203 00:36:15.561159 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c50b4d5-c70e-42bd-9843-41b466c65602\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:16 crc kubenswrapper[3561]: I1203 00:36:16.009435 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:16 crc kubenswrapper[3561]: E1203 00:36:16.009634 3561 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:16 crc kubenswrapper[3561]: E1203 00:36:16.009749 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls podName:8883bba5-6555-4f12-9e01-5e4fc4712a25 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:17.00971271 +0000 UTC m=+1775.790146968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8883bba5-6555-4f12-9e01-5e4fc4712a25") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:17 crc kubenswrapper[3561]: I1203 00:36:17.022833 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:17 crc kubenswrapper[3561]: E1203 00:36:17.023047 3561 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:17 crc kubenswrapper[3561]: E1203 00:36:17.023174 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls podName:8883bba5-6555-4f12-9e01-5e4fc4712a25 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:19.023145136 +0000 UTC m=+1777.803579414 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8883bba5-6555-4f12-9e01-5e4fc4712a25") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:17 crc kubenswrapper[3561]: I1203 00:36:17.718294 3561 generic.go:334] "Generic (PLEG): container finished" podID="782f2197-f3e1-4ea1-988f-7acbd394c9e8" containerID="531469437cbc3a4fe1fddcceaa78c6ed8766af7692dc960d5e88b8679ae7346c" exitCode=0 Dec 03 00:36:17 crc kubenswrapper[3561]: I1203 00:36:17.718507 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerDied","Data":"531469437cbc3a4fe1fddcceaa78c6ed8766af7692dc960d5e88b8679ae7346c"} Dec 03 00:36:19 crc kubenswrapper[3561]: I1203 00:36:19.105552 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:19 crc kubenswrapper[3561]: I1203 00:36:19.111651 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8883bba5-6555-4f12-9e01-5e4fc4712a25-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8883bba5-6555-4f12-9e01-5e4fc4712a25\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:19 crc kubenswrapper[3561]: I1203 00:36:19.153664 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:19 crc kubenswrapper[3561]: I1203 00:36:19.665789 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:36:19 crc kubenswrapper[3561]: E1203 00:36:19.672137 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:36:20 crc kubenswrapper[3561]: I1203 00:36:20.566707 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:36:20 crc kubenswrapper[3561]: W1203 00:36:20.744368 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8883bba5_6555_4f12_9e01_5e4fc4712a25.slice/crio-391aed145b5c0f0300eb19201303a4c228e5cc5096b7d9564222bcafeaf52793 WatchSource:0}: Error finding container 391aed145b5c0f0300eb19201303a4c228e5cc5096b7d9564222bcafeaf52793: Status 404 returned error can't find the container with id 391aed145b5c0f0300eb19201303a4c228e5cc5096b7d9564222bcafeaf52793 Dec 03 00:36:21 crc kubenswrapper[3561]: I1203 00:36:21.751666 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerStarted","Data":"391aed145b5c0f0300eb19201303a4c228e5cc5096b7d9564222bcafeaf52793"} Dec 03 00:36:21 crc kubenswrapper[3561]: I1203 00:36:21.756951 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn" 
event={"ID":"b2b2911c-9415-4aac-a85d-b29266af55c2","Type":"ContainerStarted","Data":"ed1820791df16a73a23dc9c6dd635b52976d8e553104b6fc664ebfed8aefbe2f"} Dec 03 00:36:21 crc kubenswrapper[3561]: I1203 00:36:21.800889 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6755fc87b7-lptzn" podStartSLOduration=2.963525876 podStartE2EDuration="10.800830664s" podCreationTimestamp="2025-12-03 00:36:11 +0000 UTC" firstStartedPulling="2025-12-03 00:36:12.503984806 +0000 UTC m=+1771.284419064" lastFinishedPulling="2025-12-03 00:36:20.341289594 +0000 UTC m=+1779.121723852" observedRunningTime="2025-12-03 00:36:21.797712166 +0000 UTC m=+1780.578146424" watchObservedRunningTime="2025-12-03 00:36:21.800830664 +0000 UTC m=+1780.581264922" Dec 03 00:36:24 crc kubenswrapper[3561]: I1203 00:36:24.774074 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerStarted","Data":"47fcc684a1b80c6639f7f985ff3a4b46a813e2b29f28f0059d0604d96906cafc"} Dec 03 00:36:25 crc kubenswrapper[3561]: I1203 00:36:25.781073 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerStarted","Data":"987af8ff7d1cba8131c4b7de3b6500bcef4a5b2710bfa763b389446f027eef84"} Dec 03 00:36:28 crc kubenswrapper[3561]: I1203 00:36:28.797258 3561 generic.go:334] "Generic (PLEG): container finished" podID="8883bba5-6555-4f12-9e01-5e4fc4712a25" containerID="47fcc684a1b80c6639f7f985ff3a4b46a813e2b29f28f0059d0604d96906cafc" exitCode=0 Dec 03 00:36:28 crc kubenswrapper[3561]: I1203 00:36:28.797350 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerDied","Data":"47fcc684a1b80c6639f7f985ff3a4b46a813e2b29f28f0059d0604d96906cafc"} 
Dec 03 00:36:29 crc kubenswrapper[3561]: I1203 00:36:29.807532 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerStarted","Data":"909d6c71fcf6488fb54c9b7bc6a8a4619c677346b0d1678460307dc033e4c456"}
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.293106 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"]
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.293476 3561 topology_manager.go:215] "Topology Admit Handler" podUID="47bafc37-8cfb-48a9-aa4d-4a486fae79df" podNamespace="service-telemetry" podName="default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.294737 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.298086 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-q5pjv"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.298508 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.298567 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.298676 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.309678 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.309726 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsw5k\" (UniqueName: \"kubernetes.io/projected/47bafc37-8cfb-48a9-aa4d-4a486fae79df-kube-api-access-gsw5k\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.309822 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.309953 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/47bafc37-8cfb-48a9-aa4d-4a486fae79df-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.310038 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/47bafc37-8cfb-48a9-aa4d-4a486fae79df-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.311805 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"]
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.410937 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsw5k\" (UniqueName: \"kubernetes.io/projected/47bafc37-8cfb-48a9-aa4d-4a486fae79df-kube-api-access-gsw5k\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.411019 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.411054 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/47bafc37-8cfb-48a9-aa4d-4a486fae79df-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.411109 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/47bafc37-8cfb-48a9-aa4d-4a486fae79df-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.411141 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: E1203 00:36:30.411248 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 03 00:36:30 crc kubenswrapper[3561]: E1203 00:36:30.411301 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls podName:47bafc37-8cfb-48a9-aa4d-4a486fae79df nodeName:}" failed. No retries permitted until 2025-12-03 00:36:30.911281168 +0000 UTC m=+1789.691715426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" (UID: "47bafc37-8cfb-48a9-aa4d-4a486fae79df") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.412904 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/47bafc37-8cfb-48a9-aa4d-4a486fae79df-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.413174 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/47bafc37-8cfb-48a9-aa4d-4a486fae79df-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.420242 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.443230 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsw5k\" (UniqueName: \"kubernetes.io/projected/47bafc37-8cfb-48a9-aa4d-4a486fae79df-kube-api-access-gsw5k\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: I1203 00:36:30.916243 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"
Dec 03 00:36:30 crc kubenswrapper[3561]: E1203 00:36:30.916432 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 03 00:36:30 crc kubenswrapper[3561]: E1203 00:36:30.916513 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls podName:47bafc37-8cfb-48a9-aa4d-4a486fae79df nodeName:}" failed. No retries permitted until 2025-12-03 00:36:31.916494545 +0000 UTC m=+1790.696928803 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" (UID: "47bafc37-8cfb-48a9-aa4d-4a486fae79df") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:31 crc kubenswrapper[3561]: I1203 00:36:31.870114 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerStarted","Data":"8bb3784c1206f5570d87059f593cde19504267676165591c5af90fba77e896a1"} Dec 03 00:36:31 crc kubenswrapper[3561]: I1203 00:36:31.961957 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" Dec 03 00:36:31 crc kubenswrapper[3561]: E1203 00:36:31.962092 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:31 crc kubenswrapper[3561]: E1203 00:36:31.962150 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls podName:47bafc37-8cfb-48a9-aa4d-4a486fae79df nodeName:}" failed. No retries permitted until 2025-12-03 00:36:33.962136304 +0000 UTC m=+1792.742570562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" (UID: "47bafc37-8cfb-48a9-aa4d-4a486fae79df") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:32 crc kubenswrapper[3561]: I1203 00:36:32.663873 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:36:32 crc kubenswrapper[3561]: E1203 00:36:32.664413 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.614570 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq"] Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.614903 3561 topology_manager.go:215] "Topology Admit Handler" podUID="cef8c730-ff3a-47c2-841e-84e32db2bd53" podNamespace="service-telemetry" podName="default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.615989 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.620795 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.620795 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.623235 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq"] Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.687515 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.687589 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.687648 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpsk\" (UniqueName: \"kubernetes.io/projected/cef8c730-ff3a-47c2-841e-84e32db2bd53-kube-api-access-ffpsk\") pod 
\"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.687688 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cef8c730-ff3a-47c2-841e-84e32db2bd53-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.687748 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cef8c730-ff3a-47c2-841e-84e32db2bd53-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.788573 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.788641 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.788676 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpsk\" (UniqueName: \"kubernetes.io/projected/cef8c730-ff3a-47c2-841e-84e32db2bd53-kube-api-access-ffpsk\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.788701 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cef8c730-ff3a-47c2-841e-84e32db2bd53-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: E1203 00:36:33.788761 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:33 crc kubenswrapper[3561]: E1203 00:36:33.788836 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls podName:cef8c730-ff3a-47c2-841e-84e32db2bd53 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:34.288816848 +0000 UTC m=+1793.069251106 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" (UID: "cef8c730-ff3a-47c2-841e-84e32db2bd53") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.788767 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cef8c730-ff3a-47c2-841e-84e32db2bd53-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.789164 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cef8c730-ff3a-47c2-841e-84e32db2bd53-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.789795 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cef8c730-ff3a-47c2-841e-84e32db2bd53-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.796042 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: 
\"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.820268 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpsk\" (UniqueName: \"kubernetes.io/projected/cef8c730-ff3a-47c2-841e-84e32db2bd53-kube-api-access-ffpsk\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.990303 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" Dec 03 00:36:33 crc kubenswrapper[3561]: I1203 00:36:33.993514 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/47bafc37-8cfb-48a9-aa4d-4a486fae79df-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws\" (UID: \"47bafc37-8cfb-48a9-aa4d-4a486fae79df\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" Dec 03 00:36:34 crc kubenswrapper[3561]: I1203 00:36:34.224326 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" Dec 03 00:36:34 crc kubenswrapper[3561]: I1203 00:36:34.313932 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:34 crc kubenswrapper[3561]: E1203 00:36:34.314083 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:34 crc kubenswrapper[3561]: E1203 00:36:34.314144 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls podName:cef8c730-ff3a-47c2-841e-84e32db2bd53 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:35.31412785 +0000 UTC m=+1794.094562108 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" (UID: "cef8c730-ff3a-47c2-841e-84e32db2bd53") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:35 crc kubenswrapper[3561]: I1203 00:36:35.330565 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:35 crc kubenswrapper[3561]: I1203 00:36:35.337351 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cef8c730-ff3a-47c2-841e-84e32db2bd53-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq\" (UID: \"cef8c730-ff3a-47c2-841e-84e32db2bd53\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:35 crc kubenswrapper[3561]: I1203 00:36:35.431985 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" Dec 03 00:36:36 crc kubenswrapper[3561]: I1203 00:36:36.899710 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerStarted","Data":"6a2f65c8288d2301b41dee647886fdc5551d8d9767d26faa066b880c2a703a4c"} Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.203717 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9"] Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.203893 3561 topology_manager.go:215] "Topology Admit Handler" podUID="8c361765-4740-4085-aff5-4504b5f660f6" podNamespace="service-telemetry" podName="default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.206360 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.209035 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9"] Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.214819 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.215290 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.273959 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8c361765-4740-4085-aff5-4504b5f660f6-socket-dir\") pod 
\"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.274010 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.274042 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxshd\" (UniqueName: \"kubernetes.io/projected/8c361765-4740-4085-aff5-4504b5f660f6-kube-api-access-qxshd\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.274064 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8c361765-4740-4085-aff5-4504b5f660f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.274089 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: 
\"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.375100 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.375165 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qxshd\" (UniqueName: \"kubernetes.io/projected/8c361765-4740-4085-aff5-4504b5f660f6-kube-api-access-qxshd\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.375215 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8c361765-4740-4085-aff5-4504b5f660f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.375246 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc 
kubenswrapper[3561]: E1203 00:36:38.375600 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:38 crc kubenswrapper[3561]: E1203 00:36:38.375669 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls podName:8c361765-4740-4085-aff5-4504b5f660f6 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:38.875649697 +0000 UTC m=+1797.656083955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" (UID: "8c361765-4740-4085-aff5-4504b5f660f6") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.376170 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8c361765-4740-4085-aff5-4504b5f660f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.376271 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8c361765-4740-4085-aff5-4504b5f660f6-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.376725 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/8c361765-4740-4085-aff5-4504b5f660f6-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.381625 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.403978 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxshd\" (UniqueName: \"kubernetes.io/projected/8c361765-4740-4085-aff5-4504b5f660f6-kube-api-access-qxshd\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: I1203 00:36:38.882684 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:38 crc kubenswrapper[3561]: E1203 00:36:38.884688 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:38 crc kubenswrapper[3561]: E1203 00:36:38.884752 3561 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls podName:8c361765-4740-4085-aff5-4504b5f660f6 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:39.884735023 +0000 UTC m=+1798.665169281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" (UID: "8c361765-4740-4085-aff5-4504b5f660f6") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.099701 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws"] Dec 03 00:36:39 crc kubenswrapper[3561]: W1203 00:36:39.106640 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47bafc37_8cfb_48a9_aa4d_4a486fae79df.slice/crio-b96201d49acdfe3e71e9c81a8d61ecbd92596e6cf16fcc3cec930dc5cde7478f WatchSource:0}: Error finding container b96201d49acdfe3e71e9c81a8d61ecbd92596e6cf16fcc3cec930dc5cde7478f: Status 404 returned error can't find the container with id b96201d49acdfe3e71e9c81a8d61ecbd92596e6cf16fcc3cec930dc5cde7478f Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.174497 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq"] Dec 03 00:36:39 crc kubenswrapper[3561]: W1203 00:36:39.177572 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcef8c730_ff3a_47c2_841e_84e32db2bd53.slice/crio-4cbaf124174cdf8b3054470c85e8e513e7eaf9035502e69b7b9334f5a0f279fc WatchSource:0}: Error finding container 4cbaf124174cdf8b3054470c85e8e513e7eaf9035502e69b7b9334f5a0f279fc: 
Status 404 returned error can't find the container with id 4cbaf124174cdf8b3054470c85e8e513e7eaf9035502e69b7b9334f5a0f279fc Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.897231 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:39 crc kubenswrapper[3561]: E1203 00:36:39.897370 3561 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:39 crc kubenswrapper[3561]: E1203 00:36:39.897426 3561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls podName:8c361765-4740-4085-aff5-4504b5f660f6 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:41.897411607 +0000 UTC m=+1800.677845875 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" (UID: "8c361765-4740-4085-aff5-4504b5f660f6") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.931053 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8883bba5-6555-4f12-9e01-5e4fc4712a25","Type":"ContainerStarted","Data":"85f89f42961091179841886cfe32e5111f16eb90a8fce5202675e9f59ccc550b"} Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.944488 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"4cbaf124174cdf8b3054470c85e8e513e7eaf9035502e69b7b9334f5a0f279fc"} Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.962774 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"44eb5de61a0c0246c94f6ada474502854c8d971cc5660e5e627bac08495a5d6c"} Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.962814 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"b96201d49acdfe3e71e9c81a8d61ecbd92596e6cf16fcc3cec930dc5cde7478f"} Dec 03 00:36:39 crc kubenswrapper[3561]: I1203 00:36:39.988500 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=15.849041466 podStartE2EDuration="25.988449944s" 
podCreationTimestamp="2025-12-03 00:36:14 +0000 UTC" firstStartedPulling="2025-12-03 00:36:28.799252436 +0000 UTC m=+1787.579686694" lastFinishedPulling="2025-12-03 00:36:38.938660914 +0000 UTC m=+1797.719095172" observedRunningTime="2025-12-03 00:36:39.987238196 +0000 UTC m=+1798.767672454" watchObservedRunningTime="2025-12-03 00:36:39.988449944 +0000 UTC m=+1798.768884202" Dec 03 00:36:40 crc kubenswrapper[3561]: I1203 00:36:40.004716 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"782f2197-f3e1-4ea1-988f-7acbd394c9e8","Type":"ContainerStarted","Data":"d1ee4b6fd27d8685d5020dad3b40718a182db040bc919900785808f247bd21cb"} Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.045500 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"430ae2bacaf0385cb197b9dcbb3cd2b4c7b856deb7a5a6d46216176c9f4ad732"} Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.586867 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.586943 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.586986 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.587002 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.587029 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:36:41 crc 
kubenswrapper[3561]: I1203 00:36:41.938588 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:41 crc kubenswrapper[3561]: I1203 00:36:41.943863 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c361765-4740-4085-aff5-4504b5f660f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9\" (UID: \"8c361765-4740-4085-aff5-4504b5f660f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:42 crc kubenswrapper[3561]: I1203 00:36:42.135491 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" Dec 03 00:36:42 crc kubenswrapper[3561]: I1203 00:36:42.783665 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=7.831858718 podStartE2EDuration="42.783527449s" podCreationTimestamp="2025-12-03 00:36:00 +0000 UTC" firstStartedPulling="2025-12-03 00:36:03.987249542 +0000 UTC m=+1762.767683800" lastFinishedPulling="2025-12-03 00:36:38.938918273 +0000 UTC m=+1797.719352531" observedRunningTime="2025-12-03 00:36:40.047554066 +0000 UTC m=+1798.827988324" watchObservedRunningTime="2025-12-03 00:36:42.783527449 +0000 UTC m=+1801.563961707" Dec 03 00:36:42 crc kubenswrapper[3561]: I1203 00:36:42.786317 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9"] Dec 03 00:36:43 crc kubenswrapper[3561]: I1203 00:36:43.778327 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:44 crc kubenswrapper[3561]: I1203 00:36:44.663971 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:36:44 crc kubenswrapper[3561]: E1203 00:36:44.664410 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:36:45 crc kubenswrapper[3561]: W1203 00:36:45.817573 3561 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c361765_4740_4085_aff5_4504b5f660f6.slice/crio-0c41f2b79ec5d38bade17b8e30aa917537d27942e736bec78ed1f165d370120c WatchSource:0}: Error finding container 0c41f2b79ec5d38bade17b8e30aa917537d27942e736bec78ed1f165d370120c: Status 404 returned error can't find the container with id 0c41f2b79ec5d38bade17b8e30aa917537d27942e736bec78ed1f165d370120c Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.073453 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"0c41f2b79ec5d38bade17b8e30aa917537d27942e736bec78ed1f165d370120c"} Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.812562 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl"] Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.813118 3561 topology_manager.go:215] "Topology Admit Handler" podUID="0253e9f5-0847-46ef-a8aa-b1282413e68a" podNamespace="service-telemetry" podName="default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.814158 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.816498 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.818138 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.821917 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl"] Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.923068 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0253e9f5-0847-46ef-a8aa-b1282413e68a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.923297 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0253e9f5-0847-46ef-a8aa-b1282413e68a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.923386 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0253e9f5-0847-46ef-a8aa-b1282413e68a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:46 crc kubenswrapper[3561]: I1203 00:36:46.923484 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4p2g\" (UniqueName: \"kubernetes.io/projected/0253e9f5-0847-46ef-a8aa-b1282413e68a-kube-api-access-q4p2g\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.024795 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0253e9f5-0847-46ef-a8aa-b1282413e68a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.024849 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0253e9f5-0847-46ef-a8aa-b1282413e68a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.024871 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0253e9f5-0847-46ef-a8aa-b1282413e68a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.024913 3561 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4p2g\" (UniqueName: \"kubernetes.io/projected/0253e9f5-0847-46ef-a8aa-b1282413e68a-kube-api-access-q4p2g\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.025583 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0253e9f5-0847-46ef-a8aa-b1282413e68a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.026172 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0253e9f5-0847-46ef-a8aa-b1282413e68a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.031193 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0253e9f5-0847-46ef-a8aa-b1282413e68a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.046885 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4p2g\" (UniqueName: \"kubernetes.io/projected/0253e9f5-0847-46ef-a8aa-b1282413e68a-kube-api-access-q4p2g\") pod 
\"default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl\" (UID: \"0253e9f5-0847-46ef-a8aa-b1282413e68a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.085861 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"d7153eb9b473b8edb4854e630730ded0d66740ce156b789684a7b77ebc85adb1"} Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.087972 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"0d09bb6dddafd03550e791de8ad5c3ef2ba485af7a66bf0a5791213552401b1d"} Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.089439 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"93dcd29d6b6ee064612c1ebdc89ed7fb935f68214b7d50349844a6d74c6a0d39"} Dec 03 00:36:47 crc kubenswrapper[3561]: I1203 00:36:47.144467 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.030346 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl"] Dec 03 00:36:48 crc kubenswrapper[3561]: W1203 00:36:48.047214 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0253e9f5_0847_46ef_a8aa_b1282413e68a.slice/crio-7947701b2d8c40060f50a51d15ff05df79cb8dbb38cbf3b260be4567d4e7974b WatchSource:0}: Error finding container 7947701b2d8c40060f50a51d15ff05df79cb8dbb38cbf3b260be4567d4e7974b: Status 404 returned error can't find the container with id 7947701b2d8c40060f50a51d15ff05df79cb8dbb38cbf3b260be4567d4e7974b Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.099222 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"ef636a7a9b0c63b9c3f8724fcb21a39e5c5dc50faf45cdd92edc45e894ebd2dd"} Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.100521 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerStarted","Data":"7947701b2d8c40060f50a51d15ff05df79cb8dbb38cbf3b260be4567d4e7974b"} Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.626139 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp"] Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.626521 3561 topology_manager.go:215] "Topology Admit Handler" podUID="bb251e10-96d1-40d2-9124-da20277237f7" podNamespace="service-telemetry" 
podName="default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.627459 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.632781 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.642479 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp"] Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.752101 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6d6\" (UniqueName: \"kubernetes.io/projected/bb251e10-96d1-40d2-9124-da20277237f7-kube-api-access-7g6d6\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.752175 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bb251e10-96d1-40d2-9124-da20277237f7-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.752220 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/bb251e10-96d1-40d2-9124-da20277237f7-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: 
\"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.752368 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bb251e10-96d1-40d2-9124-da20277237f7-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.779047 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.853587 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bb251e10-96d1-40d2-9124-da20277237f7-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.854381 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6d6\" (UniqueName: \"kubernetes.io/projected/bb251e10-96d1-40d2-9124-da20277237f7-kube-api-access-7g6d6\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.854436 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bb251e10-96d1-40d2-9124-da20277237f7-sg-core-config\") pod 
\"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.854468 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bb251e10-96d1-40d2-9124-da20277237f7-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.854551 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/bb251e10-96d1-40d2-9124-da20277237f7-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.854915 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bb251e10-96d1-40d2-9124-da20277237f7-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.862711 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/bb251e10-96d1-40d2-9124-da20277237f7-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 
00:36:48.869496 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6d6\" (UniqueName: \"kubernetes.io/projected/bb251e10-96d1-40d2-9124-da20277237f7-kube-api-access-7g6d6\") pod \"default-cloud1-ceil-event-smartgateway-68466754b4-br6cp\" (UID: \"bb251e10-96d1-40d2-9124-da20277237f7\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.906368 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:48 crc kubenswrapper[3561]: I1203 00:36:48.963002 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" Dec 03 00:36:49 crc kubenswrapper[3561]: I1203 00:36:49.124488 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerStarted","Data":"ad7516bab490e15d0fd82fa182383c8cfe5d2f063bb835359bfe7e1943a19aef"} Dec 03 00:36:49 crc kubenswrapper[3561]: I1203 00:36:49.237803 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:49 crc kubenswrapper[3561]: I1203 00:36:49.286298 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp"] Dec 03 00:36:56 crc kubenswrapper[3561]: I1203 00:36:56.664460 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:36:56 crc kubenswrapper[3561]: E1203 00:36:56.665388 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:36:57 crc kubenswrapper[3561]: W1203 00:36:57.927208 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb251e10_96d1_40d2_9124_da20277237f7.slice/crio-4edf0a7e5680a879a5f8762b125cac1dd7a694e7fc3717c7f28de597d7fbf1bf WatchSource:0}: Error finding container 4edf0a7e5680a879a5f8762b125cac1dd7a694e7fc3717c7f28de597d7fbf1bf: Status 404 returned error can't find the container with id 4edf0a7e5680a879a5f8762b125cac1dd7a694e7fc3717c7f28de597d7fbf1bf Dec 03 00:36:58 crc kubenswrapper[3561]: I1203 00:36:58.188102 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerStarted","Data":"4edf0a7e5680a879a5f8762b125cac1dd7a694e7fc3717c7f28de597d7fbf1bf"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.241300 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerStarted","Data":"fc2dd0bb12682e025d80f5fe23eacdf32702db0e58b912caa13e5afe7f338291"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.245330 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"e3770620d4f59a723e549d9d32b8de989057a534f8bbce8a8d846184275e6a9f"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.250594 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerStarted","Data":"2432014afbb946ee2f4017fff339bf9570408c0d0d3c6be7eedee7b53800a6ce"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.250703 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerStarted","Data":"4135331c4c7f7dde7624805d25a17d237c32be9effebe0cc61d0b920a2b49536"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.256277 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"e5d2915c988b828ec1aeacfc840788e5580c72045f7365d72f33fb94b3283d5c"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.262006 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" podStartSLOduration=2.991284368 podStartE2EDuration="15.259239129s" podCreationTimestamp="2025-12-03 00:36:46 +0000 UTC" firstStartedPulling="2025-12-03 00:36:48.070697816 +0000 UTC m=+1806.851132064" lastFinishedPulling="2025-12-03 00:37:00.338652567 +0000 UTC m=+1819.119086825" observedRunningTime="2025-12-03 00:37:01.258767695 +0000 UTC m=+1820.039201953" watchObservedRunningTime="2025-12-03 00:37:01.259239129 +0000 UTC m=+1820.039673387" Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.267659 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"0e6e8578ed5c4b4677cb77853f445fba94ad1f12263af364013f608734053c3c"} Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 
00:37:01.288715 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" podStartSLOduration=10.776712945 podStartE2EDuration="13.288605124s" podCreationTimestamp="2025-12-03 00:36:48 +0000 UTC" firstStartedPulling="2025-12-03 00:36:57.931017077 +0000 UTC m=+1816.711451325" lastFinishedPulling="2025-12-03 00:37:00.442909246 +0000 UTC m=+1819.223343504" observedRunningTime="2025-12-03 00:37:01.279774069 +0000 UTC m=+1820.060208337" watchObservedRunningTime="2025-12-03 00:37:01.288605124 +0000 UTC m=+1820.069039442" Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.304623 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" podStartSLOduration=7.391357182 podStartE2EDuration="28.304576552s" podCreationTimestamp="2025-12-03 00:36:33 +0000 UTC" firstStartedPulling="2025-12-03 00:36:39.180663748 +0000 UTC m=+1797.961098006" lastFinishedPulling="2025-12-03 00:37:00.093883108 +0000 UTC m=+1818.874317376" observedRunningTime="2025-12-03 00:37:01.303554171 +0000 UTC m=+1820.083988429" watchObservedRunningTime="2025-12-03 00:37:01.304576552 +0000 UTC m=+1820.085010820" Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 00:37:01.344400 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" podStartSLOduration=9.096492833 podStartE2EDuration="23.344266669s" podCreationTimestamp="2025-12-03 00:36:38 +0000 UTC" firstStartedPulling="2025-12-03 00:36:45.820351099 +0000 UTC m=+1804.600785357" lastFinishedPulling="2025-12-03 00:37:00.068124935 +0000 UTC m=+1818.848559193" observedRunningTime="2025-12-03 00:37:01.333352239 +0000 UTC m=+1820.113786497" watchObservedRunningTime="2025-12-03 00:37:01.344266669 +0000 UTC m=+1820.124700957" Dec 03 00:37:01 crc kubenswrapper[3561]: I1203 
00:37:01.363952 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" podStartSLOduration=10.361633945 podStartE2EDuration="31.363902751s" podCreationTimestamp="2025-12-03 00:36:30 +0000 UTC" firstStartedPulling="2025-12-03 00:36:39.109165439 +0000 UTC m=+1797.889599687" lastFinishedPulling="2025-12-03 00:37:00.111434235 +0000 UTC m=+1818.891868493" observedRunningTime="2025-12-03 00:37:01.358352518 +0000 UTC m=+1820.138786786" watchObservedRunningTime="2025-12-03 00:37:01.363902751 +0000 UTC m=+1820.144337009" Dec 03 00:37:04 crc kubenswrapper[3561]: I1203 00:37:04.583758 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"] Dec 03 00:37:04 crc kubenswrapper[3561]: I1203 00:37:04.584212 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" containerName="default-interconnect" containerID="cri-o://a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02" gracePeriod=30 Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.048484 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131006 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131076 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131117 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131168 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knw8b\" (UniqueName: \"kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131255 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 
00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131300 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.131337 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config\") pod \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\" (UID: \"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e\") " Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.138955 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.142355 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.142409 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.142778 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b" (OuterVolumeSpecName: "kube-api-access-knw8b") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "kube-api-access-knw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.142929 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.161743 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.165405 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" (UID: "0bc2b7e8-1108-42e8-8c78-b1653bb13d2e"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233112 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-knw8b\" (UniqueName: \"kubernetes.io/projected/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-kube-api-access-knw8b\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233151 3561 reconciler_common.go:300] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233165 3561 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233175 3561 reconciler_common.go:300] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233191 3561 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-inter-router-ca\") on node 
\"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233202 3561 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.233214 3561 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.288827 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.288829 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" event={"ID":"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e","Type":"ContainerDied","Data":"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02"} Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.288974 3561 scope.go:117] "RemoveContainer" containerID="a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.289449 3561 generic.go:334] "Generic (PLEG): container finished" podID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" containerID="a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02" exitCode=0 Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.290584 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-z7qbr" event={"ID":"0bc2b7e8-1108-42e8-8c78-b1653bb13d2e","Type":"ContainerDied","Data":"8dd0302b49f5a476beb3c67211c5a679968917acc6e8673697c1200badd5d66e"} Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 
00:37:05.329751 3561 generic.go:334] "Generic (PLEG): container finished" podID="cef8c730-ff3a-47c2-841e-84e32db2bd53" containerID="0d09bb6dddafd03550e791de8ad5c3ef2ba485af7a66bf0a5791213552401b1d" exitCode=0 Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.329818 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerDied","Data":"0d09bb6dddafd03550e791de8ad5c3ef2ba485af7a66bf0a5791213552401b1d"} Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.330498 3561 scope.go:117] "RemoveContainer" containerID="0d09bb6dddafd03550e791de8ad5c3ef2ba485af7a66bf0a5791213552401b1d" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.337600 3561 scope.go:117] "RemoveContainer" containerID="a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02" Dec 03 00:37:05 crc kubenswrapper[3561]: E1203 00:37:05.338310 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02\": container with ID starting with a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02 not found: ID does not exist" containerID="a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.338397 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02"} err="failed to get container status \"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02\": rpc error: code = NotFound desc = could not find container \"a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02\": container with ID starting with a47d5adb571dcb524f5c81374ae3823b3ae3352fa6df9230d0c7a9e298285d02 not found: ID does not exist" Dec 03 00:37:05 
crc kubenswrapper[3561]: I1203 00:37:05.343281 3561 generic.go:334] "Generic (PLEG): container finished" podID="47bafc37-8cfb-48a9-aa4d-4a486fae79df" containerID="93dcd29d6b6ee064612c1ebdc89ed7fb935f68214b7d50349844a6d74c6a0d39" exitCode=0 Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.343496 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerDied","Data":"93dcd29d6b6ee064612c1ebdc89ed7fb935f68214b7d50349844a6d74c6a0d39"} Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.344290 3561 scope.go:117] "RemoveContainer" containerID="93dcd29d6b6ee064612c1ebdc89ed7fb935f68214b7d50349844a6d74c6a0d39" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.344805 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"] Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.359175 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-z7qbr"] Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.361968 3561 generic.go:334] "Generic (PLEG): container finished" podID="0253e9f5-0847-46ef-a8aa-b1282413e68a" containerID="ad7516bab490e15d0fd82fa182383c8cfe5d2f063bb835359bfe7e1943a19aef" exitCode=0 Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.362003 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerDied","Data":"ad7516bab490e15d0fd82fa182383c8cfe5d2f063bb835359bfe7e1943a19aef"} Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.362527 3561 scope.go:117] "RemoveContainer" containerID="ad7516bab490e15d0fd82fa182383c8cfe5d2f063bb835359bfe7e1943a19aef" Dec 03 00:37:05 crc kubenswrapper[3561]: I1203 00:37:05.671256 3561 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" path="/var/lib/kubelet/pods/0bc2b7e8-1108-42e8-8c78-b1653bb13d2e/volumes" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.370523 3561 generic.go:334] "Generic (PLEG): container finished" podID="8c361765-4740-4085-aff5-4504b5f660f6" containerID="ef636a7a9b0c63b9c3f8724fcb21a39e5c5dc50faf45cdd92edc45e894ebd2dd" exitCode=0 Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.370583 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerDied","Data":"ef636a7a9b0c63b9c3f8724fcb21a39e5c5dc50faf45cdd92edc45e894ebd2dd"} Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.371349 3561 scope.go:117] "RemoveContainer" containerID="ef636a7a9b0c63b9c3f8724fcb21a39e5c5dc50faf45cdd92edc45e894ebd2dd" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.373093 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb251e10-96d1-40d2-9124-da20277237f7" containerID="4135331c4c7f7dde7624805d25a17d237c32be9effebe0cc61d0b920a2b49536" exitCode=0 Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.373164 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerDied","Data":"4135331c4c7f7dde7624805d25a17d237c32be9effebe0cc61d0b920a2b49536"} Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.373805 3561 scope.go:117] "RemoveContainer" containerID="4135331c4c7f7dde7624805d25a17d237c32be9effebe0cc61d0b920a2b49536" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.376779 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" 
event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"4d5d21fa4baea70acb2e6f1326a23eb20c1c0655cb5abd21eb9bb6c584d4401a"} Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.380632 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"4c4151a90ad395587a1f52e967c255b1500ad43943066e424858bff25a049f92"} Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.387816 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerStarted","Data":"ce99121e66d210bec55c8a8f93fd599dbac8073096a5b9d0873b3cf869e6e296"} Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.595477 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-lrnj5"] Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.595828 3561 topology_manager.go:215] "Topology Admit Handler" podUID="2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: E1203 00:37:06.595986 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" containerName="default-interconnect" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.595998 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" containerName="default-interconnect" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.596111 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc2b7e8-1108-42e8-8c78-b1653bb13d2e" containerName="default-interconnect" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.596508 3561 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.599792 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-qxf4g" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.604193 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.604262 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.604432 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.604451 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.605188 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.607351 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.617928 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-lrnj5"] Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656318 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-config\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " 
pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656381 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-users\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656419 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-kube-api-access-rw5wn\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656613 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656654 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656676 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.656701 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758073 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-config\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758124 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-users\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758156 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-kube-api-access-rw5wn\") pod \"default-interconnect-84dbc59cb8-lrnj5\" 
(UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758255 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758278 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758300 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.758438 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc 
kubenswrapper[3561]: I1203 00:37:06.759922 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-config\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.763677 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.764804 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.765441 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.765996 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.766773 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-sasl-users\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.779867 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d-kube-api-access-rw5wn\") pod \"default-interconnect-84dbc59cb8-lrnj5\" (UID: \"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d\") " pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:06 crc kubenswrapper[3561]: I1203 00:37:06.932011 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.406756 3561 generic.go:334] "Generic (PLEG): container finished" podID="47bafc37-8cfb-48a9-aa4d-4a486fae79df" containerID="4c4151a90ad395587a1f52e967c255b1500ad43943066e424858bff25a049f92" exitCode=0 Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.406876 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerDied","Data":"4c4151a90ad395587a1f52e967c255b1500ad43943066e424858bff25a049f92"} Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.406906 3561 scope.go:117] "RemoveContainer" containerID="93dcd29d6b6ee064612c1ebdc89ed7fb935f68214b7d50349844a6d74c6a0d39" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.407764 3561 scope.go:117] "RemoveContainer" containerID="4c4151a90ad395587a1f52e967c255b1500ad43943066e424858bff25a049f92" Dec 03 00:37:07 crc kubenswrapper[3561]: E1203 00:37:07.408725 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws_service-telemetry(47bafc37-8cfb-48a9-aa4d-4a486fae79df)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" podUID="47bafc37-8cfb-48a9-aa4d-4a486fae79df" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.436731 3561 generic.go:334] "Generic (PLEG): container finished" podID="0253e9f5-0847-46ef-a8aa-b1282413e68a" containerID="ce99121e66d210bec55c8a8f93fd599dbac8073096a5b9d0873b3cf869e6e296" exitCode=0 Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.436821 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" 
event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerDied","Data":"ce99121e66d210bec55c8a8f93fd599dbac8073096a5b9d0873b3cf869e6e296"} Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.437371 3561 scope.go:117] "RemoveContainer" containerID="ce99121e66d210bec55c8a8f93fd599dbac8073096a5b9d0873b3cf869e6e296" Dec 03 00:37:07 crc kubenswrapper[3561]: E1203 00:37:07.437740 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl_service-telemetry(0253e9f5-0847-46ef-a8aa-b1282413e68a)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" podUID="0253e9f5-0847-46ef-a8aa-b1282413e68a" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.437903 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-lrnj5"] Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.501903 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"92db98b310bebed3edee4a367a8097d3cf52e1eb8f2925c29684592fec4abf9b"} Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.506925 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerStarted","Data":"07557dc46250b1b4a82f65e7e7ea9a326b401d09cc4a17df3f6d0ffd8273c69f"} Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.509353 3561 scope.go:117] "RemoveContainer" containerID="ad7516bab490e15d0fd82fa182383c8cfe5d2f063bb835359bfe7e1943a19aef" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.522907 3561 generic.go:334] "Generic (PLEG): container finished" 
podID="cef8c730-ff3a-47c2-841e-84e32db2bd53" containerID="4d5d21fa4baea70acb2e6f1326a23eb20c1c0655cb5abd21eb9bb6c584d4401a" exitCode=0 Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.522953 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerDied","Data":"4d5d21fa4baea70acb2e6f1326a23eb20c1c0655cb5abd21eb9bb6c584d4401a"} Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.523764 3561 scope.go:117] "RemoveContainer" containerID="4d5d21fa4baea70acb2e6f1326a23eb20c1c0655cb5abd21eb9bb6c584d4401a" Dec 03 00:37:07 crc kubenswrapper[3561]: E1203 00:37:07.524180 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq_service-telemetry(cef8c730-ff3a-47c2-841e-84e32db2bd53)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" podUID="cef8c730-ff3a-47c2-841e-84e32db2bd53" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.654770 3561 scope.go:117] "RemoveContainer" containerID="0d09bb6dddafd03550e791de8ad5c3ef2ba485af7a66bf0a5791213552401b1d" Dec 03 00:37:07 crc kubenswrapper[3561]: I1203 00:37:07.667380 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:37:07 crc kubenswrapper[3561]: E1203 00:37:07.667857 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" 
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.532320 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" event={"ID":"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d","Type":"ContainerStarted","Data":"86e24f0b8e57e42acfa2fa2a690f09b102c62ca71fc6f304b48676b9fbd2334f"}
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.534229 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" event={"ID":"2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d","Type":"ContainerStarted","Data":"0f11061aa54230064f411e20894fe8a3a75f5be721e6bfbe948ce017dc3d473c"}
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.537553 3561 generic.go:334] "Generic (PLEG): container finished" podID="8c361765-4740-4085-aff5-4504b5f660f6" containerID="92db98b310bebed3edee4a367a8097d3cf52e1eb8f2925c29684592fec4abf9b" exitCode=0
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.537639 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerDied","Data":"92db98b310bebed3edee4a367a8097d3cf52e1eb8f2925c29684592fec4abf9b"}
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.538002 3561 scope.go:117] "RemoveContainer" containerID="ef636a7a9b0c63b9c3f8724fcb21a39e5c5dc50faf45cdd92edc45e894ebd2dd"
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.538685 3561 scope.go:117] "RemoveContainer" containerID="92db98b310bebed3edee4a367a8097d3cf52e1eb8f2925c29684592fec4abf9b"
Dec 03 00:37:08 crc kubenswrapper[3561]: E1203 00:37:08.539402 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9_service-telemetry(8c361765-4740-4085-aff5-4504b5f660f6)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" podUID="8c361765-4740-4085-aff5-4504b5f660f6"
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.546997 3561 generic.go:334] "Generic (PLEG): container finished" podID="bb251e10-96d1-40d2-9124-da20277237f7" containerID="07557dc46250b1b4a82f65e7e7ea9a326b401d09cc4a17df3f6d0ffd8273c69f" exitCode=0
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.547106 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerDied","Data":"07557dc46250b1b4a82f65e7e7ea9a326b401d09cc4a17df3f6d0ffd8273c69f"}
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.547877 3561 scope.go:117] "RemoveContainer" containerID="07557dc46250b1b4a82f65e7e7ea9a326b401d09cc4a17df3f6d0ffd8273c69f"
Dec 03 00:37:08 crc kubenswrapper[3561]: E1203 00:37:08.548324 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-68466754b4-br6cp_service-telemetry(bb251e10-96d1-40d2-9124-da20277237f7)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" podUID="bb251e10-96d1-40d2-9124-da20277237f7"
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.559510 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-lrnj5" podStartSLOduration=4.559471488 podStartE2EDuration="4.559471488s" podCreationTimestamp="2025-12-03 00:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:37:08.553771301 +0000 UTC m=+1827.334205559" watchObservedRunningTime="2025-12-03 00:37:08.559471488 +0000 UTC m=+1827.339905746"
Dec 03 00:37:08 crc kubenswrapper[3561]: I1203 00:37:08.728839 3561 scope.go:117] "RemoveContainer" containerID="4135331c4c7f7dde7624805d25a17d237c32be9effebe0cc61d0b920a2b49536"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.123894 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.124254 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f3ad3084-c800-4d29-affd-4951649cd87d" podNamespace="service-telemetry" podName="qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.124923 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.128129 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.128144 3561 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.145594 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.220575 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f3ad3084-c800-4d29-affd-4951649cd87d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.220743 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp7gq\" (UniqueName: \"kubernetes.io/projected/f3ad3084-c800-4d29-affd-4951649cd87d-kube-api-access-wp7gq\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.220868 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/f3ad3084-c800-4d29-affd-4951649cd87d-qdr-test-config\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.322395 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wp7gq\" (UniqueName: \"kubernetes.io/projected/f3ad3084-c800-4d29-affd-4951649cd87d-kube-api-access-wp7gq\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.322455 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/f3ad3084-c800-4d29-affd-4951649cd87d-qdr-test-config\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.322508 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f3ad3084-c800-4d29-affd-4951649cd87d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.323413 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/f3ad3084-c800-4d29-affd-4951649cd87d-qdr-test-config\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.329137 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f3ad3084-c800-4d29-affd-4951649cd87d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.345184 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp7gq\" (UniqueName: \"kubernetes.io/projected/f3ad3084-c800-4d29-affd-4951649cd87d-kube-api-access-wp7gq\") pod \"qdr-test\" (UID: \"f3ad3084-c800-4d29-affd-4951649cd87d\") " pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.451430 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 03 00:37:11 crc kubenswrapper[3561]: I1203 00:37:11.727811 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 03 00:37:12 crc kubenswrapper[3561]: I1203 00:37:12.591673 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"f3ad3084-c800-4d29-affd-4951649cd87d","Type":"ContainerStarted","Data":"262058647b1e83bb39339096ecebbe9d637548434d93743346f259a011e62984"}
Dec 03 00:37:17 crc kubenswrapper[3561]: I1203 00:37:17.664571 3561 scope.go:117] "RemoveContainer" containerID="4c4151a90ad395587a1f52e967c255b1500ad43943066e424858bff25a049f92"
Dec 03 00:37:18 crc kubenswrapper[3561]: I1203 00:37:18.665157 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:37:18 crc kubenswrapper[3561]: E1203 00:37:18.665845 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:37:18 crc kubenswrapper[3561]: I1203 00:37:18.666030 3561 scope.go:117] "RemoveContainer" containerID="07557dc46250b1b4a82f65e7e7ea9a326b401d09cc4a17df3f6d0ffd8273c69f"
Dec 03 00:37:19 crc kubenswrapper[3561]: I1203 00:37:19.666740 3561 scope.go:117] "RemoveContainer" containerID="ce99121e66d210bec55c8a8f93fd599dbac8073096a5b9d0873b3cf869e6e296"
Dec 03 00:37:21 crc kubenswrapper[3561]: I1203 00:37:21.669911 3561 scope.go:117] "RemoveContainer" containerID="4d5d21fa4baea70acb2e6f1326a23eb20c1c0655cb5abd21eb9bb6c584d4401a"
Dec 03 00:37:22 crc kubenswrapper[3561]: I1203 00:37:22.664904 3561 scope.go:117] "RemoveContainer" containerID="92db98b310bebed3edee4a367a8097d3cf52e1eb8f2925c29684592fec4abf9b"
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.699920 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"f3ad3084-c800-4d29-affd-4951649cd87d","Type":"ContainerStarted","Data":"632ed924ded641c1707d902fe4fcbc36a8f62da759437da0b84032cae15d4054"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.702843 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9" event={"ID":"8c361765-4740-4085-aff5-4504b5f660f6","Type":"ContainerStarted","Data":"5927a31992e4755cd993090da06d43554414cf2f4a19924355d793c95d807ad7"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.706426 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-68466754b4-br6cp" event={"ID":"bb251e10-96d1-40d2-9124-da20277237f7","Type":"ContainerStarted","Data":"53034015775f489ccbc8987d014368d06861812a7b671d288bab1d9f4cfab05d"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.711985 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq" event={"ID":"cef8c730-ff3a-47c2-841e-84e32db2bd53","Type":"ContainerStarted","Data":"003c9dc707ca4ff68cc7090a2923400f792fc7c098c2cbf63c26ab6f50ee086a"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.715896 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws" event={"ID":"47bafc37-8cfb-48a9-aa4d-4a486fae79df","Type":"ContainerStarted","Data":"b9dd7bfd4885e305a73c978414c64a26f87c7dedd9bc8aca516ce003472f5449"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.721934 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl" event={"ID":"0253e9f5-0847-46ef-a8aa-b1282413e68a","Type":"ContainerStarted","Data":"518807643fbd21dc841b6b5e766dc733c85791893693a73cca95101416439f25"}
Dec 03 00:37:23 crc kubenswrapper[3561]: I1203 00:37:23.727429 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.680791448 podStartE2EDuration="12.727387352s" podCreationTimestamp="2025-12-03 00:37:11 +0000 UTC" firstStartedPulling="2025-12-03 00:37:11.749035138 +0000 UTC m=+1830.529469396" lastFinishedPulling="2025-12-03 00:37:22.795631042 +0000 UTC m=+1841.576065300" observedRunningTime="2025-12-03 00:37:23.725384019 +0000 UTC m=+1842.505818287" watchObservedRunningTime="2025-12-03 00:37:23.727387352 +0000 UTC m=+1842.507821620"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.052376 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hl7zx"]
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.052493 3561 topology_manager.go:215] "Topology Admit Handler" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" podNamespace="service-telemetry" podName="stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.053407 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.055234 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.055281 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.056608 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.056817 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.056936 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.057035 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.061507 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hl7zx"]
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.242724 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.242784 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.242823 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.243036 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.243175 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsstt\" (UniqueName: \"kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.243218 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.243250 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.344018 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qsstt\" (UniqueName: \"kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.344379 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.345293 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.345341 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.345381 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.345429 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.345414 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.346154 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.346109 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.346234 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.346814 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.346879 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.347458 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.363318 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsstt\" (UniqueName: \"kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt\") pod \"stf-smoketest-smoke1-hl7zx\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.371790 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hl7zx"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.419353 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"]
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.419515 3561 topology_manager.go:215] "Topology Admit Handler" podUID="e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" podNamespace="service-telemetry" podName="curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.420349 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.428997 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.448287 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jkp2\" (UniqueName: \"kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2\") pod \"curl\" (UID: \"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863\") " pod="service-telemetry/curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.549416 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5jkp2\" (UniqueName: \"kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2\") pod \"curl\" (UID: \"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863\") " pod="service-telemetry/curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.674733 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jkp2\" (UniqueName: \"kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2\") pod \"curl\" (UID: \"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863\") " pod="service-telemetry/curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.736651 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 03 00:37:24 crc kubenswrapper[3561]: I1203 00:37:24.757784 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hl7zx"]
Dec 03 00:37:24 crc kubenswrapper[3561]: W1203 00:37:24.773211 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061f78a7_c0e1_4cfc_b1e9_2ac929e05b9c.slice/crio-17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084 WatchSource:0}: Error finding container 17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084: Status 404 returned error can't find the container with id 17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084
Dec 03 00:37:25 crc kubenswrapper[3561]: I1203 00:37:25.169153 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 03 00:37:25 crc kubenswrapper[3561]: W1203 00:37:25.175086 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d5a8dd_660e_4284_af8b_5b0e5c9ee863.slice/crio-a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f WatchSource:0}: Error finding container a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f: Status 404 returned error can't find the container with id a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f
Dec 03 00:37:25 crc kubenswrapper[3561]: I1203 00:37:25.736959 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863","Type":"ContainerStarted","Data":"a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f"}
Dec 03 00:37:25 crc kubenswrapper[3561]: I1203 00:37:25.738168 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerStarted","Data":"17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084"}
Dec 03 00:37:28 crc kubenswrapper[3561]: I1203 00:37:28.806283 3561 generic.go:334] "Generic (PLEG): container finished" podID="e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" containerID="34ce7a220c9886d1151aad0bba295aa2be3bff620c704a17c601bda1b2811ba5" exitCode=0
Dec 03 00:37:28 crc kubenswrapper[3561]: I1203 00:37:28.806622 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863","Type":"ContainerDied","Data":"34ce7a220c9886d1151aad0bba295aa2be3bff620c704a17c601bda1b2811ba5"}
Dec 03 00:37:32 crc kubenswrapper[3561]: I1203 00:37:32.664150 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:37:32 crc kubenswrapper[3561]: E1203 00:37:32.664891 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.432648 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.558950 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jkp2\" (UniqueName: \"kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2\") pod \"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863\" (UID: \"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863\") "
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.563102 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2" (OuterVolumeSpecName: "kube-api-access-5jkp2") pod "e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" (UID: "e3d5a8dd-660e-4284-af8b-5b0e5c9ee863"). InnerVolumeSpecName "kube-api-access-5jkp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.660909 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5jkp2\" (UniqueName: \"kubernetes.io/projected/e3d5a8dd-660e-4284-af8b-5b0e5c9ee863-kube-api-access-5jkp2\") on node \"crc\" DevicePath \"\""
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.716873 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_e3d5a8dd-660e-4284-af8b-5b0e5c9ee863/curl/0.log"
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.906030 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e3d5a8dd-660e-4284-af8b-5b0e5c9ee863","Type":"ContainerDied","Data":"a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f"}
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.906087 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f4abb6b8caa8271b8ebaea3e43c943df0f2efe72a6170c0caebdb17109203f"
Dec 03 00:37:38 crc kubenswrapper[3561]: I1203 00:37:38.906135 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 03 00:37:39 crc kubenswrapper[3561]: I1203 00:37:39.021602 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-lptzn_b2b2911c-9415-4aac-a85d-b29266af55c2/prometheus-webhook-snmp/0.log"
Dec 03 00:37:39 crc kubenswrapper[3561]: I1203 00:37:39.932073 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerStarted","Data":"6e0b0fb8f3fdcd59c4b2b36ffaf7ad87e1aedc525d575b9a1bbdbc969998076f"}
Dec 03 00:37:41 crc kubenswrapper[3561]: I1203 00:37:41.588230 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:37:41 crc kubenswrapper[3561]: I1203 00:37:41.588316 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:37:41 crc kubenswrapper[3561]: I1203 00:37:41.588362 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:37:41 crc kubenswrapper[3561]: I1203 00:37:41.588407 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:37:41 crc kubenswrapper[3561]: I1203 00:37:41.588434 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:37:44 crc kubenswrapper[3561]: I1203 00:37:44.664906 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:37:44 crc kubenswrapper[3561]: E1203 00:37:44.665711 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:37:45 crc kubenswrapper[3561]: I1203 00:37:45.972211 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerStarted","Data":"dfd61fc9498e8c9ea85987aa0b0d25e0d04e18572a3032578496ee699e7393ff"}
Dec 03 00:37:55 crc kubenswrapper[3561]: I1203 00:37:55.665212 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:37:55 crc kubenswrapper[3561]: E1203 00:37:55.666307 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:38:09 crc kubenswrapper[3561]: I1203 00:38:09.162564 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-lptzn_b2b2911c-9415-4aac-a85d-b29266af55c2/prometheus-webhook-snmp/0.log"
Dec 03 00:38:10 crc kubenswrapper[3561]: I1203 00:38:10.671264 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6"
Dec 03 00:38:10 crc kubenswrapper[3561]: E1203 00:38:10.672117 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Dec 03 00:38:14 crc kubenswrapper[3561]: I1203 00:38:14.150240 3561 generic.go:334] "Generic (PLEG): container finished" podID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerID="6e0b0fb8f3fdcd59c4b2b36ffaf7ad87e1aedc525d575b9a1bbdbc969998076f" exitCode=0
Dec 03 00:38:14 crc kubenswrapper[3561]: I1203 00:38:14.150283 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerDied","Data":"6e0b0fb8f3fdcd59c4b2b36ffaf7ad87e1aedc525d575b9a1bbdbc969998076f"}
Dec 03 00:38:14 crc kubenswrapper[3561]: I1203 00:38:14.150933 3561 scope.go:117] "RemoveContainer" containerID="6e0b0fb8f3fdcd59c4b2b36ffaf7ad87e1aedc525d575b9a1bbdbc969998076f"
Dec 03 00:38:18 crc kubenswrapper[3561]: I1203 00:38:18.192978 3561 generic.go:334] "Generic (PLEG): container finished" podID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerID="dfd61fc9498e8c9ea85987aa0b0d25e0d04e18572a3032578496ee699e7393ff" exitCode=0
Dec 03 00:38:18 crc kubenswrapper[3561]: I1203 00:38:18.194409 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerDied","Data":"dfd61fc9498e8c9ea85987aa0b0d25e0d04e18572a3032578496ee699e7393ff"}
Dec 03 00:38:19 crc kubenswrapper[3561]: I1203 00:38:19.912686 3561 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033391 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033458 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033565 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033670 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033724 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033751 3561 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.033777 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsstt\" (UniqueName: \"kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt\") pod \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\" (UID: \"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c\") " Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.040829 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt" (OuterVolumeSpecName: "kube-api-access-qsstt") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "kube-api-access-qsstt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.052440 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.058152 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.058417 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.058586 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.059754 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.094275 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" (UID: "061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135762 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qsstt\" (UniqueName: \"kubernetes.io/projected/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-kube-api-access-qsstt\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135802 3561 reconciler_common.go:300] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135813 3561 reconciler_common.go:300] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135823 3561 reconciler_common.go:300] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135834 3561 reconciler_common.go:300] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135844 3561 reconciler_common.go:300] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.135854 3561 reconciler_common.go:300] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c-healthcheck-log\") on node 
\"crc\" DevicePath \"\"" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.209596 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" event={"ID":"061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c","Type":"ContainerDied","Data":"17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084"} Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.209642 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17134901bcf6749469ae6a008c52af5189b4749c06ad69aa461437aac8ea6084" Dec 03 00:38:20 crc kubenswrapper[3561]: I1203 00:38:20.209682 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hl7zx" Dec 03 00:38:21 crc kubenswrapper[3561]: I1203 00:38:21.935093 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hl7zx_061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c/smoketest-collectd/0.log" Dec 03 00:38:22 crc kubenswrapper[3561]: I1203 00:38:22.271879 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hl7zx_061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c/smoketest-ceilometer/0.log" Dec 03 00:38:22 crc kubenswrapper[3561]: I1203 00:38:22.608577 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-84dbc59cb8-lrnj5_2f6ead95-e0a9-49ae-ba8c-f549eaa77a2d/default-interconnect/0.log" Dec 03 00:38:23 crc kubenswrapper[3561]: I1203 00:38:23.019698 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws_47bafc37-8cfb-48a9-aa4d-4a486fae79df/bridge/2.log" Dec 03 00:38:23 crc kubenswrapper[3561]: I1203 00:38:23.264747 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-68d7cdf9d4-q8xws_47bafc37-8cfb-48a9-aa4d-4a486fae79df/sg-core/0.log" Dec 03 
00:38:23 crc kubenswrapper[3561]: I1203 00:38:23.606700 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl_0253e9f5-0847-46ef-a8aa-b1282413e68a/bridge/2.log" Dec 03 00:38:23 crc kubenswrapper[3561]: I1203 00:38:23.920490 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7bb7744557-mzqpl_0253e9f5-0847-46ef-a8aa-b1282413e68a/sg-core/0.log" Dec 03 00:38:24 crc kubenswrapper[3561]: I1203 00:38:24.222993 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq_cef8c730-ff3a-47c2-841e-84e32db2bd53/bridge/2.log" Dec 03 00:38:24 crc kubenswrapper[3561]: I1203 00:38:24.558390 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7866965967-9qvfq_cef8c730-ff3a-47c2-841e-84e32db2bd53/sg-core/0.log" Dec 03 00:38:24 crc kubenswrapper[3561]: I1203 00:38:24.923997 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-68466754b4-br6cp_bb251e10-96d1-40d2-9124-da20277237f7/bridge/2.log" Dec 03 00:38:25 crc kubenswrapper[3561]: I1203 00:38:25.263967 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-68466754b4-br6cp_bb251e10-96d1-40d2-9124-da20277237f7/sg-core/0.log" Dec 03 00:38:25 crc kubenswrapper[3561]: I1203 00:38:25.583050 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9_8c361765-4740-4085-aff5-4504b5f660f6/bridge/2.log" Dec 03 00:38:25 crc kubenswrapper[3561]: I1203 00:38:25.664560 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:38:25 crc kubenswrapper[3561]: E1203 
00:38:25.665148 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:38:25 crc kubenswrapper[3561]: I1203 00:38:25.971243 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-c774c44f7-hg4d9_8c361765-4740-4085-aff5-4504b5f660f6/sg-core/0.log" Dec 03 00:38:28 crc kubenswrapper[3561]: I1203 00:38:28.296311 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5488c6f949-2ndv8_91f69ebe-c562-473e-9aa7-959b35cccc55/operator/0.log" Dec 03 00:38:28 crc kubenswrapper[3561]: I1203 00:38:28.583947 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_782f2197-f3e1-4ea1-988f-7acbd394c9e8/prometheus/0.log" Dec 03 00:38:28 crc kubenswrapper[3561]: I1203 00:38:28.891484 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_da255b5b-e06c-4925-84bc-ed08b801a8b5/elasticsearch/0.log" Dec 03 00:38:29 crc kubenswrapper[3561]: I1203 00:38:29.220696 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-lptzn_b2b2911c-9415-4aac-a85d-b29266af55c2/prometheus-webhook-snmp/0.log" Dec 03 00:38:29 crc kubenswrapper[3561]: I1203 00:38:29.537377 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_8883bba5-6555-4f12-9e01-5e4fc4712a25/alertmanager/0.log" Dec 03 00:38:37 crc kubenswrapper[3561]: I1203 00:38:37.664928 3561 scope.go:117] "RemoveContainer" 
containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:38:37 crc kubenswrapper[3561]: E1203 00:38:37.666040 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:38:41 crc kubenswrapper[3561]: I1203 00:38:41.590213 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:38:41 crc kubenswrapper[3561]: I1203 00:38:41.590598 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:38:41 crc kubenswrapper[3561]: I1203 00:38:41.590681 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:38:41 crc kubenswrapper[3561]: I1203 00:38:41.590734 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:38:41 crc kubenswrapper[3561]: I1203 00:38:41.590777 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:38:45 crc kubenswrapper[3561]: I1203 00:38:45.856108 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6755fcb848-kqnxv_2618f9bf-b6a4-4371-8f7c-685c9682054a/operator/0.log" Dec 03 00:38:48 crc kubenswrapper[3561]: I1203 00:38:48.095120 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5488c6f949-2ndv8_91f69ebe-c562-473e-9aa7-959b35cccc55/operator/0.log" Dec 03 00:38:48 crc 
kubenswrapper[3561]: I1203 00:38:48.367984 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_f3ad3084-c800-4d29-affd-4951649cd87d/qdr/0.log" Dec 03 00:38:48 crc kubenswrapper[3561]: I1203 00:38:48.664224 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:38:48 crc kubenswrapper[3561]: E1203 00:38:48.664988 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:38:51 crc kubenswrapper[3561]: I1203 00:38:51.171445 3561 scope.go:117] "RemoveContainer" containerID="49e983dafb5f51b8777452a1dfe96b15adef931815762ab166e94c56378ef69c" Dec 03 00:38:51 crc kubenswrapper[3561]: I1203 00:38:51.228349 3561 scope.go:117] "RemoveContainer" containerID="71927c3c8b8941f69e8b72d9bd715129975c1476ae34fed28dd627be071ac998" Dec 03 00:39:00 crc kubenswrapper[3561]: I1203 00:39:00.664958 3561 scope.go:117] "RemoveContainer" containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:39:00 crc kubenswrapper[3561]: E1203 00:39:00.666910 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 03 00:39:15 crc kubenswrapper[3561]: I1203 00:39:15.665349 3561 scope.go:117] "RemoveContainer" 
containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:39:16 crc kubenswrapper[3561]: I1203 00:39:16.748099 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"95e8b59f3f10a4f469fb6c0f2480e9f9ae30881de6c52f1b19a8f907c17bdc12"} Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.460533 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t82n8/must-gather-rl25q"] Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461047 3561 topology_manager.go:215] "Topology Admit Handler" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" podNamespace="openshift-must-gather-t82n8" podName="must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: E1203 00:39:24.461227 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerName="smoketest-collectd" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461240 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerName="smoketest-collectd" Dec 03 00:39:24 crc kubenswrapper[3561]: E1203 00:39:24.461258 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" containerName="curl" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461264 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" containerName="curl" Dec 03 00:39:24 crc kubenswrapper[3561]: E1203 00:39:24.461273 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerName="smoketest-ceilometer" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461279 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" 
containerName="smoketest-ceilometer" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461406 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerName="smoketest-ceilometer" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461422 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d5a8dd-660e-4284-af8b-5b0e5c9ee863" containerName="curl" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.461433 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="061f78a7-c0e1-4cfc-b1e9-2ac929e05b9c" containerName="smoketest-collectd" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.462110 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.464432 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t82n8"/"openshift-service-ca.crt" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.464526 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-must-gather-t82n8"/"default-dockercfg-w92cc" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.466890 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t82n8"/"kube-root-ca.crt" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.476654 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t82n8/must-gather-rl25q"] Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.615584 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnbc\" (UniqueName: \"kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 
crc kubenswrapper[3561]: I1203 00:39:24.615725 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.716210 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnbc\" (UniqueName: \"kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.716311 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.716760 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc kubenswrapper[3561]: I1203 00:39:24.741448 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnbc\" (UniqueName: \"kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc\") pod \"must-gather-rl25q\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") " pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:24 crc 
kubenswrapper[3561]: I1203 00:39:24.779784 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:39:25 crc kubenswrapper[3561]: I1203 00:39:25.062872 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t82n8/must-gather-rl25q"] Dec 03 00:39:25 crc kubenswrapper[3561]: I1203 00:39:25.833280 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t82n8/must-gather-rl25q" event={"ID":"6d693d29-f2f8-42aa-b44f-fd3e42e6446b","Type":"ContainerStarted","Data":"530a96b09907f69506ad615c518bacbf994550a63a2af73bfb9b1c03a9ac17f3"} Dec 03 00:39:33 crc kubenswrapper[3561]: I1203 00:39:33.898915 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t82n8/must-gather-rl25q" event={"ID":"6d693d29-f2f8-42aa-b44f-fd3e42e6446b","Type":"ContainerStarted","Data":"c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407"} Dec 03 00:39:33 crc kubenswrapper[3561]: I1203 00:39:33.899526 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t82n8/must-gather-rl25q" event={"ID":"6d693d29-f2f8-42aa-b44f-fd3e42e6446b","Type":"ContainerStarted","Data":"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5"} Dec 03 00:39:33 crc kubenswrapper[3561]: I1203 00:39:33.916228 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-must-gather-t82n8/must-gather-rl25q" podStartSLOduration=2.260848673 podStartE2EDuration="9.916193811s" podCreationTimestamp="2025-12-03 00:39:24 +0000 UTC" firstStartedPulling="2025-12-03 00:39:25.070924547 +0000 UTC m=+1963.851358815" lastFinishedPulling="2025-12-03 00:39:32.726269695 +0000 UTC m=+1971.506703953" observedRunningTime="2025-12-03 00:39:33.915950194 +0000 UTC m=+1972.696384442" watchObservedRunningTime="2025-12-03 00:39:33.916193811 +0000 UTC m=+1972.696628079" Dec 03 00:39:41 crc kubenswrapper[3561]: I1203 
00:39:41.591069 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:39:41 crc kubenswrapper[3561]: I1203 00:39:41.591769 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:39:41 crc kubenswrapper[3561]: I1203 00:39:41.591863 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:39:41 crc kubenswrapper[3561]: I1203 00:39:41.591896 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:39:41 crc kubenswrapper[3561]: I1203 00:39:41.591957 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:39:51 crc kubenswrapper[3561]: I1203 00:39:51.296105 3561 scope.go:117] "RemoveContainer" containerID="73d94a1838920186ca3b8670cd68181d8b18a4b842d3c692cef1c2e917c56b1a"
Dec 03 00:39:51 crc kubenswrapper[3561]: I1203 00:39:51.345269 3561 scope.go:117] "RemoveContainer" containerID="924196d0d9d33c50ca04c59bd3289a354d058628356a3fa44bd052cd9901f8d2"
Dec 03 00:40:23 crc kubenswrapper[3561]: I1203 00:40:23.760412 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log"
Dec 03 00:40:23 crc kubenswrapper[3561]: I1203 00:40:23.856906 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log"
Dec 03 00:40:23 crc kubenswrapper[3561]: I1203 00:40:23.944817 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log"
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.900429 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.901159 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" podNamespace="openshift-marketplace" podName="redhat-operators-t7fqv"
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.902862 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.910089 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.967628 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.967900 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lf4p\" (UniqueName: \"kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:31 crc kubenswrapper[3561]: I1203 00:40:31.967928 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.069228 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.069290 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7lf4p\" (UniqueName: \"kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.069325 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.069818 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.069925 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.092652 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lf4p\" (UniqueName: \"kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p\") pod \"redhat-operators-t7fqv\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") " pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.237666 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:32 crc kubenswrapper[3561]: I1203 00:40:32.481223 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:40:33 crc kubenswrapper[3561]: I1203 00:40:33.447591 3561 generic.go:334] "Generic (PLEG): container finished" podID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerID="ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b" exitCode=0
Dec 03 00:40:33 crc kubenswrapper[3561]: I1203 00:40:33.447724 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerDied","Data":"ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b"}
Dec 03 00:40:33 crc kubenswrapper[3561]: I1203 00:40:33.447860 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerStarted","Data":"bdd0282eb0949311898fca1a09f4944d03154f251ed830fa13bfdf427eadb299"}
Dec 03 00:40:34 crc kubenswrapper[3561]: I1203 00:40:34.455465 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerStarted","Data":"672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6"}
Dec 03 00:40:39 crc kubenswrapper[3561]: I1203 00:40:39.844678 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-755d7666d5-2zlnx_f9dd874d-0d74-40c8-991f-a10b62bfb3df/cert-manager-controller/0.log"
Dec 03 00:40:39 crc kubenswrapper[3561]: I1203 00:40:39.993572 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-6dcc74f67d-p58t5_e3e6daa4-5a3f-453f-a047-c9dbbb1b9e6e/cert-manager-cainjector/0.log"
Dec 03 00:40:40 crc kubenswrapper[3561]: I1203 00:40:40.160959 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-58ffc98b58-6q7xn_7499489a-6d26-4a2d-b1e2-ffb9410d42cc/cert-manager-webhook/0.log"
Dec 03 00:40:41 crc kubenswrapper[3561]: I1203 00:40:41.592889 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:40:41 crc kubenswrapper[3561]: I1203 00:40:41.593215 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:40:41 crc kubenswrapper[3561]: I1203 00:40:41.593265 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:40:41 crc kubenswrapper[3561]: I1203 00:40:41.593282 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:40:41 crc kubenswrapper[3561]: I1203 00:40:41.593335 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:40:45 crc kubenswrapper[3561]: I1203 00:40:45.520153 3561 generic.go:334] "Generic (PLEG): container finished" podID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerID="672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6" exitCode=0
Dec 03 00:40:45 crc kubenswrapper[3561]: I1203 00:40:45.520244 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerDied","Data":"672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6"}
Dec 03 00:40:47 crc kubenswrapper[3561]: I1203 00:40:47.562095 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerStarted","Data":"356c78bb44fa9851c6aa8517ead3319d7d7b0a5316c244df320e1d9b76bfd75f"}
Dec 03 00:40:47 crc kubenswrapper[3561]: I1203 00:40:47.600024 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7fqv" podStartSLOduration=4.238456885 podStartE2EDuration="16.59997331s" podCreationTimestamp="2025-12-03 00:40:31 +0000 UTC" firstStartedPulling="2025-12-03 00:40:33.450886231 +0000 UTC m=+2032.231320489" lastFinishedPulling="2025-12-03 00:40:45.812402656 +0000 UTC m=+2044.592836914" observedRunningTime="2025-12-03 00:40:47.595960414 +0000 UTC m=+2046.376394692" watchObservedRunningTime="2025-12-03 00:40:47.59997331 +0000 UTC m=+2046.380407578"
Dec 03 00:40:52 crc kubenswrapper[3561]: I1203 00:40:52.237983 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:52 crc kubenswrapper[3561]: I1203 00:40:52.238530 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:40:53 crc kubenswrapper[3561]: I1203 00:40:53.340915 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7fqv" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="registry-server" probeResult="failure" output=<
Dec 03 00:40:53 crc kubenswrapper[3561]: timeout: failed to connect service ":50051" within 1s
Dec 03 00:40:53 crc kubenswrapper[3561]: >
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.315079 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/util/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.510298 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/util/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.535423 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/pull/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.597761 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/pull/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.773929 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/util/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.812820 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/pull/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.845999 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69vxsff_0eb6f441-29ca-4f0c-a7e9-69c5dee817e8/extract/0.log"
Dec 03 00:41:01 crc kubenswrapper[3561]: I1203 00:41:01.965434 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/util/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.156195 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/pull/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.190867 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/pull/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.221801 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/util/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.429352 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.517210 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.560129 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.567117 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/util/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.589828 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/pull/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.663305 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210zcjnm_9730140c-48cc-4687-ba52-9049cf40283e/extract/0.log"
Dec 03 00:41:02 crc kubenswrapper[3561]: I1203 00:41:02.789791 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.009209 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.040882 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/pull/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.096691 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/pull/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.236710 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/extract/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.315580 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.335304 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8kdsl_b7b8992b-c566-4f5b-830e-b6754d5b0c22/pull/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.499146 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.617117 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.641766 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/pull/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.696144 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/pull/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.842305 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7fqv" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="registry-server" containerID="cri-o://356c78bb44fa9851c6aa8517ead3319d7d7b0a5316c244df320e1d9b76bfd75f" gracePeriod=2
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.868643 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/util/0.log"
Dec 03 00:41:03 crc kubenswrapper[3561]: I1203 00:41:03.922086 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/pull/0.log"
Dec 03 00:41:04 crc kubenswrapper[3561]: I1203 00:41:04.422635 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-utilities/0.log"
Dec 03 00:41:04 crc kubenswrapper[3561]: I1203 00:41:04.451038 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-content/0.log"
Dec 03 00:41:04 crc kubenswrapper[3561]: I1203 00:41:04.634994 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-content/0.log"
Dec 03 00:41:04 crc kubenswrapper[3561]: I1203 00:41:04.911892 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-content/0.log"
Dec 03 00:41:04 crc kubenswrapper[3561]: I1203 00:41:04.948710 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-utilities/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.153635 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/registry-server/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.483494 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-utilities/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.484025 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p5z9s_436d7366-bd91-4ff3-be8f-88da5d161203/extract-utilities/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.484195 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej6p6x_8141457f-4211-4f39-a116-f6d971976b48/extract/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.542777 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-utilities/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.564595 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-content/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.677413 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-content/0.log"
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.862457 3561 generic.go:334] "Generic (PLEG): container finished" podID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerID="356c78bb44fa9851c6aa8517ead3319d7d7b0a5316c244df320e1d9b76bfd75f" exitCode=0
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.862506 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerDied","Data":"356c78bb44fa9851c6aa8517ead3319d7d7b0a5316c244df320e1d9b76bfd75f"}
Dec 03 00:41:05 crc kubenswrapper[3561]: I1203 00:41:05.953427 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-content/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.025405 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/extract-utilities/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.028874 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-xmpf5_054d1742-0d77-4532-8193-ddbc28411371/marketplace-operator/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.038623 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqpdr_d4f7dbd8-6337-441a-8572-7eb95a3cb2b4/registry-server/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.139261 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.249253 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-utilities/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.283725 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities\") pod \"a2d6c98d-4417-44de-b985-65fb764a68dd\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") "
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.283944 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lf4p\" (UniqueName: \"kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p\") pod \"a2d6c98d-4417-44de-b985-65fb764a68dd\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") "
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.283991 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content\") pod \"a2d6c98d-4417-44de-b985-65fb764a68dd\" (UID: \"a2d6c98d-4417-44de-b985-65fb764a68dd\") "
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.284807 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities" (OuterVolumeSpecName: "utilities") pod "a2d6c98d-4417-44de-b985-65fb764a68dd" (UID: "a2d6c98d-4417-44de-b985-65fb764a68dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.291229 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p" (OuterVolumeSpecName: "kube-api-access-7lf4p") pod "a2d6c98d-4417-44de-b985-65fb764a68dd" (UID: "a2d6c98d-4417-44de-b985-65fb764a68dd"). InnerVolumeSpecName "kube-api-access-7lf4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.386167 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.386205 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7lf4p\" (UniqueName: \"kubernetes.io/projected/a2d6c98d-4417-44de-b985-65fb764a68dd-kube-api-access-7lf4p\") on node \"crc\" DevicePath \"\""
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.483273 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-utilities/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.484570 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-content/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.521945 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-content/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.870402 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7fqv" event={"ID":"a2d6c98d-4417-44de-b985-65fb764a68dd","Type":"ContainerDied","Data":"bdd0282eb0949311898fca1a09f4944d03154f251ed830fa13bfdf427eadb299"}
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.871284 3561 scope.go:117] "RemoveContainer" containerID="356c78bb44fa9851c6aa8517ead3319d7d7b0a5316c244df320e1d9b76bfd75f"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.870763 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7fqv"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.885287 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-utilities/0.log"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.910146 3561 scope.go:117] "RemoveContainer" containerID="672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6"
Dec 03 00:41:06 crc kubenswrapper[3561]: I1203 00:41:06.998528 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/registry-server/0.log"
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.007851 3561 scope.go:117] "RemoveContainer" containerID="ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b"
Dec 03 00:41:07 crc kubenswrapper[3561]: E1203 00:41:07.084664 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b\": container with ID starting with ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b not found: ID does not exist" containerID="ec1d691df69efe18db89d8e1487c00116b6573d4671845553132043e2a6c682b"
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.108707 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqmqd_bb2ea96b-ff13-4771-b1d0-c04ee7903248/extract-content/0.log"
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.193511 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2d6c98d-4417-44de-b985-65fb764a68dd" (UID: "a2d6c98d-4417-44de-b985-65fb764a68dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.202647 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2d6c98d-4417-44de-b985-65fb764a68dd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:41:07 crc kubenswrapper[3561]: E1203 00:41:07.276234 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6\": container with ID starting with 672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6 not found: ID does not exist" containerID="672db88c21c21dc63e0bb326692e682e47896f40c60a16a94a49585f3e6033d6"
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.503681 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.509163 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7fqv"]
Dec 03 00:41:07 crc kubenswrapper[3561]: I1203 00:41:07.672665 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" path="/var/lib/kubelet/pods/a2d6c98d-4417-44de-b985-65fb764a68dd/volumes"
Dec 03 00:41:23 crc kubenswrapper[3561]: I1203 00:41:23.031865 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-864b67f9b9-vw4dk_c267c58e-aca9-4d40-9433-6ac42b961c69/prometheus-operator/0.log"
Dec 03 00:41:23 crc kubenswrapper[3561]: I1203 00:41:23.160417 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-644ff5b658-5bhsx_d8202e2e-7630-4d92-aede-476610ebb07c/prometheus-operator-admission-webhook/0.log"
Dec 03 00:41:23 crc kubenswrapper[3561]: I1203 00:41:23.255426 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-644ff5b658-l678j_04b977ab-167b-4c68-8e41-ab6cae4d68c0/prometheus-operator-admission-webhook/0.log"
Dec 03 00:41:23 crc kubenswrapper[3561]: I1203 00:41:23.378661 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-65df589ff7-p7p4m_e5e45422-f52d-4358-acd4-50f60e173df6/operator/0.log"
Dec 03 00:41:23 crc kubenswrapper[3561]: I1203 00:41:23.420502 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-574fd8d65d-gxgqt_c240c075-98b5-4b3e-8bf2-4f7f17b715ba/perses-operator/0.log"
Dec 03 00:41:27 crc kubenswrapper[3561]: I1203 00:41:27.623516 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:41:27 crc kubenswrapper[3561]: I1203 00:41:27.623926 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:41:41 crc kubenswrapper[3561]: I1203 00:41:41.593589 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:41:41 crc kubenswrapper[3561]: I1203 00:41:41.594344 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:41:41 crc kubenswrapper[3561]: I1203 00:41:41.594423 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:41:41 crc kubenswrapper[3561]: I1203 00:41:41.594452 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:41:41 crc kubenswrapper[3561]: I1203 00:41:41.594499 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:41:57 crc kubenswrapper[3561]: I1203 00:41:57.623018 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:41:57 crc kubenswrapper[3561]: I1203 00:41:57.623604 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:42:10 crc kubenswrapper[3561]: I1203 00:42:10.346595 3561 generic.go:334] "Generic (PLEG): container finished" podID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerID="673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5" exitCode=0
Dec 03 00:42:10 crc kubenswrapper[3561]: I1203 00:42:10.346659 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t82n8/must-gather-rl25q" event={"ID":"6d693d29-f2f8-42aa-b44f-fd3e42e6446b","Type":"ContainerDied","Data":"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5"}
Dec 03 00:42:10 crc kubenswrapper[3561]: I1203 00:42:10.349722 3561 scope.go:117] "RemoveContainer" containerID="673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5"
Dec 03 00:42:11 crc kubenswrapper[3561]: I1203 00:42:11.350145 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t82n8_must-gather-rl25q_6d693d29-f2f8-42aa-b44f-fd3e42e6446b/gather/0.log"
Dec 03 00:42:17 crc kubenswrapper[3561]: I1203 00:42:17.985282 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t82n8/must-gather-rl25q"]
Dec 03 00:42:17 crc kubenswrapper[3561]: I1203 00:42:17.986326 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-must-gather-t82n8/must-gather-rl25q" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="copy" containerID="cri-o://c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407" gracePeriod=2
Dec 03 00:42:17 crc kubenswrapper[3561]: I1203 00:42:17.999153 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t82n8/must-gather-rl25q"]
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.282532 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t82n8_must-gather-rl25q_6d693d29-f2f8-42aa-b44f-fd3e42e6446b/copy/0.log"
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.283350 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t82n8/must-gather-rl25q"
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.335590 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnbc\" (UniqueName: \"kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc\") pod \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") "
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.335820 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output\") pod \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\" (UID: \"6d693d29-f2f8-42aa-b44f-fd3e42e6446b\") "
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.342713 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc" (OuterVolumeSpecName: "kube-api-access-wbnbc") pod "6d693d29-f2f8-42aa-b44f-fd3e42e6446b" (UID: "6d693d29-f2f8-42aa-b44f-fd3e42e6446b"). InnerVolumeSpecName "kube-api-access-wbnbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.396177 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6d693d29-f2f8-42aa-b44f-fd3e42e6446b" (UID: "6d693d29-f2f8-42aa-b44f-fd3e42e6446b"). InnerVolumeSpecName "must-gather-output".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.437740 3561 reconciler_common.go:300] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.437784 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wbnbc\" (UniqueName: \"kubernetes.io/projected/6d693d29-f2f8-42aa-b44f-fd3e42e6446b-kube-api-access-wbnbc\") on node \"crc\" DevicePath \"\"" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.496423 3561 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t82n8_must-gather-rl25q_6d693d29-f2f8-42aa-b44f-fd3e42e6446b/copy/0.log" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.496794 3561 generic.go:334] "Generic (PLEG): container finished" podID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerID="c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407" exitCode=143 Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.496834 3561 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t82n8/must-gather-rl25q" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.496843 3561 scope.go:117] "RemoveContainer" containerID="c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.545525 3561 scope.go:117] "RemoveContainer" containerID="673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.696827 3561 scope.go:117] "RemoveContainer" containerID="c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407" Dec 03 00:42:18 crc kubenswrapper[3561]: E1203 00:42:18.697300 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407\": container with ID starting with c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407 not found: ID does not exist" containerID="c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.697353 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407"} err="failed to get container status \"c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407\": rpc error: code = NotFound desc = could not find container \"c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407\": container with ID starting with c94d5f5f9b4cda49bf3f11b6c2bff888b2cbb2215db035e6bfd4cf78ef30e407 not found: ID does not exist" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.697365 3561 scope.go:117] "RemoveContainer" containerID="673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5" Dec 03 00:42:18 crc kubenswrapper[3561]: E1203 00:42:18.697812 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5\": container with ID starting with 673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5 not found: ID does not exist" containerID="673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5" Dec 03 00:42:18 crc kubenswrapper[3561]: I1203 00:42:18.697862 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5"} err="failed to get container status \"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5\": rpc error: code = NotFound desc = could not find container \"673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5\": container with ID starting with 673f525670714efeaaa0c5c1afd0f3a7cb5ffabe2e484f0880ed96bafdb429f5 not found: ID does not exist" Dec 03 00:42:19 crc kubenswrapper[3561]: I1203 00:42:19.670965 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" path="/var/lib/kubelet/pods/6d693d29-f2f8-42aa-b44f-fd3e42e6446b/volumes" Dec 03 00:42:27 crc kubenswrapper[3561]: I1203 00:42:27.624056 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:42:27 crc kubenswrapper[3561]: I1203 00:42:27.624855 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:42:27 crc kubenswrapper[3561]: I1203 00:42:27.624931 3561 
kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 03 00:42:27 crc kubenswrapper[3561]: I1203 00:42:27.626204 3561 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95e8b59f3f10a4f469fb6c0f2480e9f9ae30881de6c52f1b19a8f907c17bdc12"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:42:27 crc kubenswrapper[3561]: I1203 00:42:27.626419 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://95e8b59f3f10a4f469fb6c0f2480e9f9ae30881de6c52f1b19a8f907c17bdc12" gracePeriod=600 Dec 03 00:42:28 crc kubenswrapper[3561]: I1203 00:42:28.602848 3561 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="95e8b59f3f10a4f469fb6c0f2480e9f9ae30881de6c52f1b19a8f907c17bdc12" exitCode=0 Dec 03 00:42:28 crc kubenswrapper[3561]: I1203 00:42:28.603226 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"95e8b59f3f10a4f469fb6c0f2480e9f9ae30881de6c52f1b19a8f907c17bdc12"} Dec 03 00:42:28 crc kubenswrapper[3561]: I1203 00:42:28.603252 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"3edf8406ea037d8367d25baea39c86075a8ed0211b09ae6a6ff217af3148f40d"} Dec 03 00:42:28 crc kubenswrapper[3561]: I1203 00:42:28.603272 3561 scope.go:117] "RemoveContainer" 
containerID="7c4a1803932e99f523e31dbb53b870c7a72542208cfb7364619a6a614e48bcb6" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.368818 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"] Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.369779 3561 topology_manager.go:215] "Topology Admit Handler" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" podNamespace="openshift-marketplace" podName="certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: E1203 00:42:36.370124 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="registry-server" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370145 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="registry-server" Dec 03 00:42:36 crc kubenswrapper[3561]: E1203 00:42:36.370213 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="extract-content" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370231 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="extract-content" Dec 03 00:42:36 crc kubenswrapper[3561]: E1203 00:42:36.370268 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="gather" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370282 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="gather" Dec 03 00:42:36 crc kubenswrapper[3561]: E1203 00:42:36.370325 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="copy" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370338 3561 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="copy" Dec 03 00:42:36 crc kubenswrapper[3561]: E1203 00:42:36.370405 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="extract-utilities" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370420 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="extract-utilities" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370735 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="gather" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370763 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d6c98d-4417-44de-b985-65fb764a68dd" containerName="registry-server" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.370802 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d693d29-f2f8-42aa-b44f-fd3e42e6446b" containerName="copy" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.373224 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.386105 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"] Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.386272 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" podNamespace="service-telemetry" podName="service-telemetry-framework-operators-czz22" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.387565 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.394882 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"] Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.426926 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"] Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.476954 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.477028 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.477074 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ghm\" (UniqueName: \"kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.578355 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities\") pod \"certified-operators-bn2nm\" (UID: 
\"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.578441 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ghm\" (UniqueName: \"kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.578561 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwvm\" (UniqueName: \"kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm\") pod \"service-telemetry-framework-operators-czz22\" (UID: \"f2b7737b-5c31-49ca-8649-b274de22a106\") " pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.578606 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.579038 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.579397 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content\") pod 
\"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.614581 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ghm\" (UniqueName: \"kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm\") pod \"certified-operators-bn2nm\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") " pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.679515 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwvm\" (UniqueName: \"kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm\") pod \"service-telemetry-framework-operators-czz22\" (UID: \"f2b7737b-5c31-49ca-8649-b274de22a106\") " pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.702434 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwvm\" (UniqueName: \"kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm\") pod \"service-telemetry-framework-operators-czz22\" (UID: \"f2b7737b-5c31-49ca-8649-b274de22a106\") " pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.740044 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:36 crc kubenswrapper[3561]: I1203 00:42:36.751615 3561 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.249890 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"] Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.254055 3561 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.271036 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"] Dec 03 00:42:37 crc kubenswrapper[3561]: W1203 00:42:37.275784 3561 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a6b7b3f_37fc_48c6_8153_3cecb10b4b15.slice/crio-90c152b8460b2f29f6a00356f7fd91dcee9e00f5aff746bebd5ee3dbbcead97e WatchSource:0}: Error finding container 90c152b8460b2f29f6a00356f7fd91dcee9e00f5aff746bebd5ee3dbbcead97e: Status 404 returned error can't find the container with id 90c152b8460b2f29f6a00356f7fd91dcee9e00f5aff746bebd5ee3dbbcead97e Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.680627 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-czz22" event={"ID":"f2b7737b-5c31-49ca-8649-b274de22a106","Type":"ContainerStarted","Data":"bb84218ee1507484425d3050aaae76a2096fdb941359593a882aedf976d4534d"} Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.680967 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-czz22" event={"ID":"f2b7737b-5c31-49ca-8649-b274de22a106","Type":"ContainerStarted","Data":"b438c8d5dd78c3642595805815f27af7eaf5e5cb26bd6dc1f6213afcd58c4f81"} Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.682253 3561 generic.go:334] "Generic (PLEG): container finished" podID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" 
containerID="3727f8700ca382703f82edbc729355244877ccc48a5d259a879558555882e526" exitCode=0 Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.682306 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerDied","Data":"3727f8700ca382703f82edbc729355244877ccc48a5d259a879558555882e526"} Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.682328 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerStarted","Data":"90c152b8460b2f29f6a00356f7fd91dcee9e00f5aff746bebd5ee3dbbcead97e"} Dec 03 00:42:37 crc kubenswrapper[3561]: I1203 00:42:37.704340 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-czz22" podStartSLOduration=1.632652598 podStartE2EDuration="1.70428641s" podCreationTimestamp="2025-12-03 00:42:36 +0000 UTC" firstStartedPulling="2025-12-03 00:42:37.253675265 +0000 UTC m=+2156.034109543" lastFinishedPulling="2025-12-03 00:42:37.325309097 +0000 UTC m=+2156.105743355" observedRunningTime="2025-12-03 00:42:37.696938581 +0000 UTC m=+2156.477372839" watchObservedRunningTime="2025-12-03 00:42:37.70428641 +0000 UTC m=+2156.484720678" Dec 03 00:42:38 crc kubenswrapper[3561]: I1203 00:42:38.689438 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerStarted","Data":"c0c2353aa08b42a462b0003a26dac4cd376c65d6bfb974f0a6a6c563af170666"} Dec 03 00:42:40 crc kubenswrapper[3561]: I1203 00:42:40.706010 3561 generic.go:334] "Generic (PLEG): container finished" podID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerID="c0c2353aa08b42a462b0003a26dac4cd376c65d6bfb974f0a6a6c563af170666" exitCode=0 Dec 03 00:42:40 crc 
kubenswrapper[3561]: I1203 00:42:40.706223 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerDied","Data":"c0c2353aa08b42a462b0003a26dac4cd376c65d6bfb974f0a6a6c563af170666"} Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.594865 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.594929 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.594965 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.595006 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.595028 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.714324 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerStarted","Data":"7ac3236ff1712481c6259c8130e3b38cfaef1a2ed0bca80da56b2403ed95a196"} Dec 03 00:42:41 crc kubenswrapper[3561]: I1203 00:42:41.737232 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn2nm" podStartSLOduration=2.370667108 podStartE2EDuration="5.737184764s" podCreationTimestamp="2025-12-03 00:42:36 +0000 UTC" firstStartedPulling="2025-12-03 00:42:37.683953536 +0000 UTC m=+2156.464387794" lastFinishedPulling="2025-12-03 00:42:41.050471192 +0000 UTC 
m=+2159.830905450" observedRunningTime="2025-12-03 00:42:41.735008706 +0000 UTC m=+2160.515442974" watchObservedRunningTime="2025-12-03 00:42:41.737184764 +0000 UTC m=+2160.517619032" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.740665 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.741301 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.752056 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.752112 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.803301 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:46 crc kubenswrapper[3561]: I1203 00:42:46.872590 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:47 crc kubenswrapper[3561]: I1203 00:42:47.847855 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-czz22" Dec 03 00:42:47 crc kubenswrapper[3561]: I1203 00:42:47.861038 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn2nm" Dec 03 00:42:47 crc kubenswrapper[3561]: I1203 00:42:47.893932 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"] Dec 03 00:42:48 crc kubenswrapper[3561]: I1203 
00:42:48.032828 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"] Dec 03 00:42:49 crc kubenswrapper[3561]: I1203 00:42:49.756791 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bn2nm" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="registry-server" containerID="cri-o://7ac3236ff1712481c6259c8130e3b38cfaef1a2ed0bca80da56b2403ed95a196" gracePeriod=2 Dec 03 00:42:49 crc kubenswrapper[3561]: I1203 00:42:49.756924 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-czz22" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" containerName="registry-server" containerID="cri-o://bb84218ee1507484425d3050aaae76a2096fdb941359593a882aedf976d4534d" gracePeriod=2 Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.780998 3561 generic.go:334] "Generic (PLEG): container finished" podID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerID="7ac3236ff1712481c6259c8130e3b38cfaef1a2ed0bca80da56b2403ed95a196" exitCode=0 Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.781253 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerDied","Data":"7ac3236ff1712481c6259c8130e3b38cfaef1a2ed0bca80da56b2403ed95a196"} Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.788117 3561 generic.go:334] "Generic (PLEG): container finished" podID="f2b7737b-5c31-49ca-8649-b274de22a106" containerID="bb84218ee1507484425d3050aaae76a2096fdb941359593a882aedf976d4534d" exitCode=0 Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.788172 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-czz22" 
event={"ID":"f2b7737b-5c31-49ca-8649-b274de22a106","Type":"ContainerDied","Data":"bb84218ee1507484425d3050aaae76a2096fdb941359593a882aedf976d4534d"}
Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.930364 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2nm"
Dec 03 00:42:51 crc kubenswrapper[3561]: I1203 00:42:51.999886 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-czz22"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.060064 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities\") pod \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") "
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.060317 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwwvm\" (UniqueName: \"kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm\") pod \"f2b7737b-5c31-49ca-8649-b274de22a106\" (UID: \"f2b7737b-5c31-49ca-8649-b274de22a106\") "
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.060398 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7ghm\" (UniqueName: \"kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm\") pod \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") "
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.060604 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content\") pod \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\" (UID: \"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15\") "
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.061015 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities" (OuterVolumeSpecName: "utilities") pod "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" (UID: "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.061199 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.067935 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm" (OuterVolumeSpecName: "kube-api-access-p7ghm") pod "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" (UID: "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15"). InnerVolumeSpecName "kube-api-access-p7ghm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.079452 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm" (OuterVolumeSpecName: "kube-api-access-fwwvm") pod "f2b7737b-5c31-49ca-8649-b274de22a106" (UID: "f2b7737b-5c31-49ca-8649-b274de22a106"). InnerVolumeSpecName "kube-api-access-fwwvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.162436 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fwwvm\" (UniqueName: \"kubernetes.io/projected/f2b7737b-5c31-49ca-8649-b274de22a106-kube-api-access-fwwvm\") on node \"crc\" DevicePath \"\""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.162503 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p7ghm\" (UniqueName: \"kubernetes.io/projected/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-kube-api-access-p7ghm\") on node \"crc\" DevicePath \"\""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.327998 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" (UID: "1a6b7b3f-37fc-48c6-8153-3cecb10b4b15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.365059 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.800277 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2nm" event={"ID":"1a6b7b3f-37fc-48c6-8153-3cecb10b4b15","Type":"ContainerDied","Data":"90c152b8460b2f29f6a00356f7fd91dcee9e00f5aff746bebd5ee3dbbcead97e"}
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.800326 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2nm"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.800340 3561 scope.go:117] "RemoveContainer" containerID="7ac3236ff1712481c6259c8130e3b38cfaef1a2ed0bca80da56b2403ed95a196"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.803112 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-czz22" event={"ID":"f2b7737b-5c31-49ca-8649-b274de22a106","Type":"ContainerDied","Data":"b438c8d5dd78c3642595805815f27af7eaf5e5cb26bd6dc1f6213afcd58c4f81"}
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.803151 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-czz22"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.859766 3561 scope.go:117] "RemoveContainer" containerID="c0c2353aa08b42a462b0003a26dac4cd376c65d6bfb974f0a6a6c563af170666"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.876641 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"]
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.884765 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bn2nm"]
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.898927 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"]
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.899207 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-czz22"]
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.945654 3561 scope.go:117] "RemoveContainer" containerID="3727f8700ca382703f82edbc729355244877ccc48a5d259a879558555882e526"
Dec 03 00:42:52 crc kubenswrapper[3561]: I1203 00:42:52.972363 3561 scope.go:117] "RemoveContainer" containerID="bb84218ee1507484425d3050aaae76a2096fdb941359593a882aedf976d4534d"
Dec 03 00:42:53 crc kubenswrapper[3561]: I1203 00:42:53.674770 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" path="/var/lib/kubelet/pods/1a6b7b3f-37fc-48c6-8153-3cecb10b4b15/volumes"
Dec 03 00:42:53 crc kubenswrapper[3561]: I1203 00:42:53.676531 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" path="/var/lib/kubelet/pods/f2b7737b-5c31-49ca-8649-b274de22a106/volumes"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.324103 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325051 3561 topology_manager.go:215] "Topology Admit Handler" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" podNamespace="openshift-marketplace" podName="community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: E1203 00:43:29.325449 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325471 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: E1203 00:43:29.325564 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="extract-utilities"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325581 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="extract-utilities"
Dec 03 00:43:29 crc kubenswrapper[3561]: E1203 00:43:29.325604 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325619 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: E1203 00:43:29.325658 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="extract-content"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325672 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="extract-content"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325910 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6b7b3f-37fc-48c6-8153-3cecb10b4b15" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.325933 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b7737b-5c31-49ca-8649-b274de22a106" containerName="registry-server"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.327765 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.332381 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.381907 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wdlx\" (UniqueName: \"kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.382042 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.382092 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.483342 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4wdlx\" (UniqueName: \"kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.483423 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.483456 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.483988 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.484247 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.511738 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wdlx\" (UniqueName: \"kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx\") pod \"community-operators-59gkk\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") " pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:29 crc kubenswrapper[3561]: I1203 00:43:29.658453 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:30 crc kubenswrapper[3561]: I1203 00:43:30.138575 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:31 crc kubenswrapper[3561]: I1203 00:43:31.079770 3561 generic.go:334] "Generic (PLEG): container finished" podID="76b19c0f-1368-48de-8c33-d164ceea836f" containerID="6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193" exitCode=0
Dec 03 00:43:31 crc kubenswrapper[3561]: I1203 00:43:31.079834 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerDied","Data":"6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193"}
Dec 03 00:43:31 crc kubenswrapper[3561]: I1203 00:43:31.079873 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerStarted","Data":"71c8e59f8a35889243b975f49459d4c12baeb55ffde0c67661ac68c914be6a59"}
Dec 03 00:43:32 crc kubenswrapper[3561]: I1203 00:43:32.087992 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerStarted","Data":"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"}
Dec 03 00:43:35 crc kubenswrapper[3561]: I1203 00:43:35.113941 3561 generic.go:334] "Generic (PLEG): container finished" podID="76b19c0f-1368-48de-8c33-d164ceea836f" containerID="cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954" exitCode=0
Dec 03 00:43:35 crc kubenswrapper[3561]: I1203 00:43:35.114061 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerDied","Data":"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"}
Dec 03 00:43:36 crc kubenswrapper[3561]: I1203 00:43:36.124288 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerStarted","Data":"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"}
Dec 03 00:43:36 crc kubenswrapper[3561]: I1203 00:43:36.163503 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-59gkk" podStartSLOduration=2.811414018 podStartE2EDuration="7.163427158s" podCreationTimestamp="2025-12-03 00:43:29 +0000 UTC" firstStartedPulling="2025-12-03 00:43:31.081784826 +0000 UTC m=+2209.862219124" lastFinishedPulling="2025-12-03 00:43:35.433797976 +0000 UTC m=+2214.214232264" observedRunningTime="2025-12-03 00:43:36.153521348 +0000 UTC m=+2214.933955616" watchObservedRunningTime="2025-12-03 00:43:36.163427158 +0000 UTC m=+2214.943861446"
Dec 03 00:43:39 crc kubenswrapper[3561]: I1203 00:43:39.659955 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:39 crc kubenswrapper[3561]: I1203 00:43:39.660353 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:40 crc kubenswrapper[3561]: I1203 00:43:40.775893 3561 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-59gkk" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="registry-server" probeResult="failure" output=<
Dec 03 00:43:40 crc kubenswrapper[3561]: timeout: failed to connect service ":50051" within 1s
Dec 03 00:43:40 crc kubenswrapper[3561]: >
Dec 03 00:43:41 crc kubenswrapper[3561]: I1203 00:43:41.595329 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:43:41 crc kubenswrapper[3561]: I1203 00:43:41.595423 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:43:41 crc kubenswrapper[3561]: I1203 00:43:41.595500 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:43:41 crc kubenswrapper[3561]: I1203 00:43:41.595602 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:43:41 crc kubenswrapper[3561]: I1203 00:43:41.595641 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:43:49 crc kubenswrapper[3561]: I1203 00:43:49.795770 3561 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:49 crc kubenswrapper[3561]: I1203 00:43:49.923176 3561 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:49 crc kubenswrapper[3561]: I1203 00:43:49.969565 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.380474 3561 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-59gkk" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="registry-server" containerID="cri-o://785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9" gracePeriod=2
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.800757 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.883220 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wdlx\" (UniqueName: \"kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx\") pod \"76b19c0f-1368-48de-8c33-d164ceea836f\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") "
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.883337 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities\") pod \"76b19c0f-1368-48de-8c33-d164ceea836f\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") "
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.883625 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content\") pod \"76b19c0f-1368-48de-8c33-d164ceea836f\" (UID: \"76b19c0f-1368-48de-8c33-d164ceea836f\") "
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.885358 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities" (OuterVolumeSpecName: "utilities") pod "76b19c0f-1368-48de-8c33-d164ceea836f" (UID: "76b19c0f-1368-48de-8c33-d164ceea836f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.891382 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx" (OuterVolumeSpecName: "kube-api-access-4wdlx") pod "76b19c0f-1368-48de-8c33-d164ceea836f" (UID: "76b19c0f-1368-48de-8c33-d164ceea836f"). InnerVolumeSpecName "kube-api-access-4wdlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.985978 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4wdlx\" (UniqueName: \"kubernetes.io/projected/76b19c0f-1368-48de-8c33-d164ceea836f-kube-api-access-4wdlx\") on node \"crc\" DevicePath \"\""
Dec 03 00:43:51 crc kubenswrapper[3561]: I1203 00:43:51.986017 3561 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.389933 3561 generic.go:334] "Generic (PLEG): container finished" podID="76b19c0f-1368-48de-8c33-d164ceea836f" containerID="785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9" exitCode=0
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.389990 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerDied","Data":"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"}
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.390069 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59gkk" event={"ID":"76b19c0f-1368-48de-8c33-d164ceea836f","Type":"ContainerDied","Data":"71c8e59f8a35889243b975f49459d4c12baeb55ffde0c67661ac68c914be6a59"}
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.390109 3561 scope.go:117] "RemoveContainer" containerID="785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.390101 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59gkk"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.429961 3561 scope.go:117] "RemoveContainer" containerID="cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.481979 3561 scope.go:117] "RemoveContainer" containerID="6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.521423 3561 scope.go:117] "RemoveContainer" containerID="785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"
Dec 03 00:43:52 crc kubenswrapper[3561]: E1203 00:43:52.522053 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9\": container with ID starting with 785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9 not found: ID does not exist" containerID="785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.522189 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9"} err="failed to get container status \"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9\": rpc error: code = NotFound desc = could not find container \"785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9\": container with ID starting with 785c9fdda039e85cbaa12dd647d50d882bc60b396ebf79ca25a4520c752e45c9 not found: ID does not exist"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.522227 3561 scope.go:117] "RemoveContainer" containerID="cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"
Dec 03 00:43:52 crc kubenswrapper[3561]: E1203 00:43:52.522741 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954\": container with ID starting with cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954 not found: ID does not exist" containerID="cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.522821 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954"} err="failed to get container status \"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954\": rpc error: code = NotFound desc = could not find container \"cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954\": container with ID starting with cec53e3681dc1d4b916676687d74c98c08c5bd60d1863918089438740cadd954 not found: ID does not exist"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.522839 3561 scope.go:117] "RemoveContainer" containerID="6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193"
Dec 03 00:43:52 crc kubenswrapper[3561]: E1203 00:43:52.523256 3561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193\": container with ID starting with 6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193 not found: ID does not exist" containerID="6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.523347 3561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193"} err="failed to get container status \"6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193\": rpc error: code = NotFound desc = could not find container \"6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193\": container with ID starting with 6d6d93096e9963a81ffb2725db3f59e9dd2089c708e39cd8e54d05d8d161d193 not found: ID does not exist"
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.575265 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76b19c0f-1368-48de-8c33-d164ceea836f" (UID: "76b19c0f-1368-48de-8c33-d164ceea836f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.597357 3561 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76b19c0f-1368-48de-8c33-d164ceea836f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.749066 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:52 crc kubenswrapper[3561]: I1203 00:43:52.756662 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-59gkk"]
Dec 03 00:43:53 crc kubenswrapper[3561]: I1203 00:43:53.676097 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" path="/var/lib/kubelet/pods/76b19c0f-1368-48de-8c33-d164ceea836f/volumes"
Dec 03 00:44:27 crc kubenswrapper[3561]: I1203 00:44:27.627577 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:44:27 crc kubenswrapper[3561]: I1203 00:44:27.629289 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:44:41 crc kubenswrapper[3561]: I1203 00:44:41.596611 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 03 00:44:41 crc kubenswrapper[3561]: I1203 00:44:41.597294 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 03 00:44:41 crc kubenswrapper[3561]: I1203 00:44:41.597356 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 03 00:44:41 crc kubenswrapper[3561]: I1203 00:44:41.597373 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 03 00:44:41 crc kubenswrapper[3561]: I1203 00:44:41.597394 3561 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 03 00:44:57 crc kubenswrapper[3561]: I1203 00:44:57.622946 3561 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:44:57 crc kubenswrapper[3561]: I1203 00:44:57.623572 3561 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.175582 3561 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"]
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.176167 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7cb72aa9-39b1-41ba-98cd-d95be2969444" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: E1203 00:45:00.176600 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="extract-utilities"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.176632 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="extract-utilities"
Dec 03 00:45:00 crc kubenswrapper[3561]: E1203 00:45:00.176678 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="registry-server"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.176698 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="registry-server"
Dec 03 00:45:00 crc kubenswrapper[3561]: E1203 00:45:00.176732 3561 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="extract-content"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.176747 3561 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="extract-content"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.177047 3561 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b19c0f-1368-48de-8c33-d164ceea836f" containerName="registry-server"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.178040 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.181646 3561 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.185498 3561 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.193940 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"]
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.248596 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.249064 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.249295 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwln\" (UniqueName: \"kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.350649 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.351107 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.351443 3561 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-plwln\" (UniqueName: \"kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.352816 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.357467 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.373220 3561 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwln\" (UniqueName: \"kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln\") pod \"collect-profiles-29412045-qgcz7\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.505227 3561 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
Dec 03 00:45:00 crc kubenswrapper[3561]: I1203 00:45:00.761937 3561 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"]
Dec 03 00:45:01 crc kubenswrapper[3561]: I1203 00:45:01.011652 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7" event={"ID":"7cb72aa9-39b1-41ba-98cd-d95be2969444","Type":"ContainerStarted","Data":"71cd0017c184f84b72a40b474a8ae05b694d41e9406bc315a04ca56bf89c7b9c"}
Dec 03 00:45:02 crc kubenswrapper[3561]: I1203 00:45:02.020237 3561 generic.go:334] "Generic (PLEG): container finished" podID="7cb72aa9-39b1-41ba-98cd-d95be2969444" containerID="0ffa1f0e083525e4fbeb4ac154bc80a1db4a205373319d3377f122c9962eb016" exitCode=0
Dec 03 00:45:02 crc kubenswrapper[3561]: I1203 00:45:02.020436 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7"
event={"ID":"7cb72aa9-39b1-41ba-98cd-d95be2969444","Type":"ContainerDied","Data":"0ffa1f0e083525e4fbeb4ac154bc80a1db4a205373319d3377f122c9962eb016"} Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.331357 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.508768 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plwln\" (UniqueName: \"kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln\") pod \"7cb72aa9-39b1-41ba-98cd-d95be2969444\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.508947 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume\") pod \"7cb72aa9-39b1-41ba-98cd-d95be2969444\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.509012 3561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume\") pod \"7cb72aa9-39b1-41ba-98cd-d95be2969444\" (UID: \"7cb72aa9-39b1-41ba-98cd-d95be2969444\") " Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.509988 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cb72aa9-39b1-41ba-98cd-d95be2969444" (UID: "7cb72aa9-39b1-41ba-98cd-d95be2969444"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.514733 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln" (OuterVolumeSpecName: "kube-api-access-plwln") pod "7cb72aa9-39b1-41ba-98cd-d95be2969444" (UID: "7cb72aa9-39b1-41ba-98cd-d95be2969444"). InnerVolumeSpecName "kube-api-access-plwln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.516471 3561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cb72aa9-39b1-41ba-98cd-d95be2969444" (UID: "7cb72aa9-39b1-41ba-98cd-d95be2969444"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.610261 3561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-plwln\" (UniqueName: \"kubernetes.io/projected/7cb72aa9-39b1-41ba-98cd-d95be2969444-kube-api-access-plwln\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.610308 3561 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb72aa9-39b1-41ba-98cd-d95be2969444-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:03 crc kubenswrapper[3561]: I1203 00:45:03.610320 3561 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb72aa9-39b1-41ba-98cd-d95be2969444-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:04 crc kubenswrapper[3561]: I1203 00:45:04.036671 3561 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7" 
event={"ID":"7cb72aa9-39b1-41ba-98cd-d95be2969444","Type":"ContainerDied","Data":"71cd0017c184f84b72a40b474a8ae05b694d41e9406bc315a04ca56bf89c7b9c"} Dec 03 00:45:04 crc kubenswrapper[3561]: I1203 00:45:04.036735 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71cd0017c184f84b72a40b474a8ae05b694d41e9406bc315a04ca56bf89c7b9c" Dec 03 00:45:04 crc kubenswrapper[3561]: I1203 00:45:04.036686 3561 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-qgcz7" Dec 03 00:45:04 crc kubenswrapper[3561]: I1203 00:45:04.409837 3561 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt"] Dec 03 00:45:04 crc kubenswrapper[3561]: I1203 00:45:04.418084 3561 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-zt5qt"] Dec 03 00:45:05 crc kubenswrapper[3561]: I1203 00:45:05.681293 3561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f46bfa4-9000-4c75-9e86-49671ca56ef0" path="/var/lib/kubelet/pods/4f46bfa4-9000-4c75-9e86-49671ca56ef0/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515113704036024445 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015113704037017363 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015113677044016515 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015113677044015465 5ustar corecore